As with software development, actually writing the stuff down is the easiest part of the work. If you already have someone fact-checking and editing… why do you need AI to churn out crap just for the writing? It would be easier to gather the facts first, fact-check them, then run them through the AI if you don’t want to hire a writer (plus another pass for editing).
LLMs look like magic at a glance, but people who think they’re going to produce high-quality content (or code, for god’s sake) are delusional.
Yeah. I’m a programmer. Everyone has been telling me that I’m about to be out of a job any day now because the “AI” is coming for me. I’m really not worried. It’s way harder to correct bad code than to just throw it all away and start fresh, and I can’t even imagine how difficult it’s going to be to debug whatever garbage some “AI” has spewed out. If you employ a dozen programmers now and you start using AI to generate your code, you’re going to need two dozen programmers to debug and fix it’s output.
The promise of “AI” (more accurately machine learning, since this is not AI) as far as code is concerned is as a sort of smart copy-and-paste: you take a chunk of code and say “duplicate this, but with these changes”, then verify and tweak its output. As a smart refactoring tool it shows a lot of promise, but it’s not like you’re going to sit down, say “write me an app”, and suddenly it’s done. Well, unless you want Hello World, and even then I’m sure it would find a way to introduce a bug or two.
“Greetings planet”
D’oh!
Yep, I’ve had plenty of discussions about this on here before. Which was a total waste of time, as idiots don’t listen to facts. They also just keep moving the goalposts.
One developer said they use AI to do their job all the time, so I asked them how that works. Turns out they “just” have to throw away the 20% of the output that’s obviously wrong when writing small scripts, and then it’s great!
Or another one who said AI is the only way for them to write code, because their main struggle is getting the syntax right (they’re dyslexic). When I told them that the syntax, the actual typing out of the code, is the easiest part of my job, they shot back that they don’t care; they’re going to keep “looking like a miracle worker” by having AI spit out their scripts…
And yet another one discussed at length how you obviously can’t magically expect AI to put out the right things. So we got onto the topic of code reviews, and I tried to tell them: give a real developer a 1000+ line pull request (like the AI might spit out) and there’s a snowball’s chance in hell you’ll get bug-free code despite reviews. So then they argued: duh, you give the AI small, bite-sized Jira tickets to work on, so you can review the output! And if the pull request is too long, you tell the AI to make a shorter, more human-readable one! And then we’re back to square one: the senior developer reviewing that mess of code could just write it faster and more correctly themselves.
It’s exhausting how little understanding there is of LLMs and their limitations. They produce a ton of seemingly high-quality stuff, but it’s never 100% correct.
It seems to mostly be replacing work that is both repetitive and pointless. I have it writing my contract letters, ‘executive white papers’, and proposals.
The contract letters I can use without edits. The white papers I usually need to redirect, but the second or third output is good. The proposals: it functionally does the job I’d have a co-op do… put stuff on paper so I can see why it isn’t right, and then write to that. (For the ‘fluffy’ parts of engineering proposals, like the cover letters, I can use it too.)
Arguably you’re comparing apples and oranges here. I agree that code reviews aren’t going to be useful for evaluating a big code dump with no context. But I’d also say that a significant amount of the software in the world is written either with no code review process at all, or with a process where a human spits out the big code dump with no context.
The AI hype is definitely hype, but there’s enough truth there to justify some of the hand-wringing. The guy who told you he only has to throw away the 20% of the code that’s useless is still getting 100% of his work done with maybe 40% of the effort (i.e., very little effort to generate the first AI cut, 20% to figure out the stupid stuff, 20% to fix it). That’s a big enough impact to have significant ripples.
Might not matter. It seems like the way it’s going to go in the short term is that paranoia and economic populism are going to kill the whole thing anyway. We’re just going to effectively make it illegal to train on data. I think that’s both a mistake and a gross misrepresentation of things like copyright, but it seems like the way we’re headed.
That’s not totally true. Even if a developer throws a massive pull request at you, there’s a high chance the dev at least ran the program locally and tried it out (at least the happy path).
With AI, the code might not even compile. Or it looks good at first glance but has a disastrous bug in the logic that is extremely easy to overlook.
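To make that concrete, here is a made-up example of the kind of quiet logic bug that survives a quick glance (the function and the numbers are my illustration, not actual AI output):

```javascript
// Runs without errors and looks reasonable in review, but the logic is wrong:
// Array.prototype.sort compares elements as strings by default, so the
// numbers end up in lexicographic order ("100" < "2" < "30").
function median(values) {
  const sorted = [...values].sort(); // bug: should be .sort((a, b) => a - b)
  return sorted[Math.floor(sorted.length / 2)];
}

console.log(median([5, 30, 9, 100, 2])); // prints 30; the true median is 9
```

A reviewer skimming a big pull request is unlikely to notice a bare sort() call, and nothing crashes to give the bug away.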
As with most code: writing it takes me maybe 10% of the time, if even that. The main problem is finding the right solution, covering the edge cases, and so on. And then you spend 190% of the time hunting down a sneaky bug that got into the code because someone didn’t think of a certain case or didn’t pay attention. If AI throws 99%-correct code at me, it would probably take me longer to properly fix it than to just write it myself from scratch.
People have been saying programming would become redundant since the first 4GLs came out in the 1980s.
Maybe it’ll actually happen some day… but I see no sign of it so far.
Yep, had this argument a bunch. Conversation basically goes:
Devil’s advocate, though. With things like 4GLs, it was still entirely on the human to come up with the detailed spec. Best case, you worked very hard, wrote a lot of things down, generated the code, saw that it didn’t work, and then ???. That “???” at the end was you as the programmer, sitting alone in a room, trying to figure out what a non-responsive black box might have wanted you to say instead.
It’s qualitatively different if you can just talk to the black box as though it were a programmer. It’s less of a black box at that point. It understands your language, and it understands the code. So you can start with the spec, but when something inevitably doesn’t work, the “???” step no longer comes down to you figuring out, with no help, what you did wrong. You can ask it questions and make suggestions. You can run experiments. Today’s LLMs hit the wall pretty quickly there, and maybe they always will. There’s certainly the viewpoint that “all they do is model text, and they can’t really learn anything”.
I think that’s fundamentally wrong. I’m a pretty solid programmer. I have a PhD in Computer Science, and I’ve worked as a software engineer and an architect throughout a pretty long career. And everything I’ve ever learned has basically been through language: through reading, writing, speaking, and listening to English and a few other languages. To say that I can learn what I’ve learned, but that it’s fundamentally impossible for a robot to learn it, is to resort to mysticism. At some point we will have AIs that can do what I do today. I think that’s inevitable.
Well, that particular conversation typically comes up in relation to something like a business rules engine, or one of those drag-and-drop visual programming languages that everyone touts as letting you get rid of programmers (but that in reality just limits you to a really hard-to-work-with programming language). Still, there’s a lot of overlap with the current LLM-based hype.
If we ever do get an actual AI, then yes, it will probably end up writing most of the programs, although it’s possible programmers will still exist in some capacity, maybe to create flow charts or something to hand to the AIs. But we’re a long way off from true AI, so everyone acting like it’s going to happen any day now is as laughable as everyone promising cold fusion any day now back in the 70s. Ironically, I think we’re more likely to see workable cold fusion before we see true AI; some of the hot fusion experiments happening lately are very promising.
Fix its* output.
See, this is why I work mostly in Java and Rust and not English. I’ve got those down, but English is WAY harder. Who even came up with this language? It’s a complete mess. Glad they’re not making programming languages… or maybe they are. Quick, see if English and JavaScript share any devs!
You should get an AI to write English for you!
On a side note, I have used AI to help with my programming, with some success. Smaller snippets and scripts (1-2 pages) are usually okay, but anything bigger than that is a big no-no. It’s also very nice for writing unit tests.
Haha, I know you’re mostly joking, but that comment about “English” creators not making programming languages is golden. Especially because most programming languages use keywords in English :)
Yeah, it was mostly meant as a joke, since English doesn’t really have a creator (or at least not one alive today); it evolved over a very long period. In terms of spelling there have been some notable contributors, but in general it’s sort of a group effort.
Then there’s JavaScript, which isn’t actually that bad, with the exception of its very confusing scoping and type coercion rules. The scoping thing is really just a side effect of mixing the OO and functional paradigms together, and the type coercion, while well-intentioned, is terribly implemented. If you removed type coercion from JS, along with the this keyword, you’d pretty much eliminate every single one of those “omg, wtf JavaScript?!” posts that make the rounds. Well… you’d still probably have the callback-hell posts with like 100 nested callbacks, but you can do that in any language; that’s not really a JS problem so much as a callback-based-API problem.
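The coercion and this-keyword complaints are easy to demonstrate; a few of the classics behind those “wtf JavaScript” posts, all checkable in Node:

```javascript
"use strict";

// Type coercion: + prefers string concatenation, - forces numbers.
console.log(1 + "1");  // "11" (the number is coerced to a string)
console.log(1 - "1");  // 0    (the string is coerced to a number)
console.log([] + []);  // ""   (both arrays coerce to empty strings)
console.log([] + {});  // "[object Object]"
console.log("" == 0);  // true  (loose equality coerces)
console.log("" === 0); // false (strict equality does not)

// The this keyword is bound at the call site, not where a method is defined:
const counter = { n: 0, inc() { this.n += 1; } };
const inc = counter.inc; // detached from its object
try {
  inc(); // here `this` is undefined (strict mode), so this.n throws
} catch (e) {
  console.log(e instanceof TypeError); // true
}
counter.inc.call(counter); // an explicit receiver works fine
console.log(counter.n);    // 1
```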
Make a note to never look at AppleScript.
“Making AI” these days isn’t so much programming as having access to millions of dollars’ worth of hardware.