• @dirtyrig · 1 year ago

    Thoughtful. I can relate to a lot of what you’ve written here. What are your thoughts on the rich getting richer? I really worry about wealth inequality.

    • @dawsoneliasenOP · 1 year ago

      I haven’t thought about this aspect of AI nearly as much, just because I spend a lot of time thinking about writing code and very little time thinking about macroeconomics. But I think you’re probably right to be worried, especially with the hype explosion caused by ChatGPT. I think ChatGPT is nigh-useless in a real practical sense, but it may work very well for the purpose of making OpenAI money. They’ve announced partnerships with huge consulting firms. It’s easy to imagine it becoming an engine for making corporations money based mostly on its perceived capabilities, without actually improving the world in any way. Consulting is already a cesspool of huge sums of capital that accomplish nothing (I work in consulting). ChatGPT is perfect for accelerating that. It’s also worth pointing out that the latest models like GPT require ungodly amounts of data and compute: you have to be obscenely rich to create them.

      • @CanadaPlus · 1 year ago (edited)

        > I think ChatGPT is nigh-useless in a real practical sense

        That’s a very strong take. If you want something to write a short story or essay for you, it’s mad useful. It can also take a fast-food order pretty reliably, right off the shelf.

      • @kool_newt@beehaw.org · 1 year ago (edited)

        I’m a programmer, and ChatGPT is incredibly useful to me now. I use it like a search engine that can pull up the perfect example to get me over a hurdle. Even when it’s wrong, it’s still useful, like having a human tutor, who can also be wrong.

        Ultimately, I think this type of AI will have an effect similar to Google’s.

    • @CanadaPlus · 1 year ago (edited)

      Not OP, but it’s a real concern, although at this rate of progression I wonder if AI ethics will have much of a splash before AI alignment becomes the question.

  • @zkxs · 1 year ago (edited)

    Wow, this is just what I’ve been looking for without even realizing it. A lot of my friends who are newer to the world of programming are very excited by this new wave of generative AI, particularly ChatGPT and GitHub Copilot. Conversely, I personally have a lot of half-formed misgivings about AI programming. I’ve been programming for a while now (although I’m sure relative to all the SDF veterans I’m still pretty new to the game), and I can’t bring myself to believe that prodding ChatGPT into a reasonable output is more efficient than just writing the code yourself… and then I start to worry that perhaps I’m biased. As they say, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”

    Anyways, your headline alone is a better argument against the merits of AI programming than anything I was able to come up with, so going into it I knew the post would be a good read. And I wasn’t disappointed: you’ve provided me with a much better framework to discuss generative AI with folks moving forward. Thanks for writing this!

    • @dawsoneliasenOP · 1 year ago

      Hey thanks, I really appreciate it! This just made my day.

    • @Modal · 1 year ago

      I think of AI in programming the same way I think about search engines (there are a lot of parallels). It can be helpful when you’re stuck or learning something new, it can be wrong, and if you use it for everything you might get something that works, but it’s not going to look like something someone with experience would have done.

    • @CanadaPlus · 1 year ago

      It’s true, but all programs start, at least partly, as natural language. Clients tell developers what they want; the developers then translate that into something that makes actual sense and is close enough to the request to make the clients happy.

      • EamonnMR · 1 year ago

        Indeed, it’s the job of the programmer to understand that natural language and use it to design a program. The lack of understanding is one thing that worries me about LLMs writing programs.

        • @CanadaPlus · 1 year ago (edited)

          Like the article mentions, it’s only good at boilerplate code at the moment and can’t really do architecture very well. I guess that’s why it’s “GitHub Copilot” and not “GitHub Pilot”.

          Going forward, who knows? We fundamentally don’t understand why LLMs work.

    • @kool_newt@beehaw.org · 1 year ago

      Think about using AI output as inspiration, examples, or a way to get over writer’s block, and less about cutting and pasting its output wholesale as completed work.

  • gjost · 1 year ago

    I was able to prod ChatGPT into writing a Python function for computing the compass direction between two points on a 2D grid. It came up with something that worked, but I had to iterate many times, and it took about as long as googling the math took when I wrote the function myself.
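
    For reference, the finished function can be pretty short. Here’s a minimal sketch of one way to write it (a reconstruction, not the code ChatGPT actually produced), assuming x grows eastward and y grows northward:

    ```python
    import math

    def compass_direction(x1, y1, x2, y2):
        """Return the compass direction (e.g. 'NE') from (x1, y1) to (x2, y2)."""
        # atan2(dx, dy) measures the angle clockwise from due north.
        bearing = math.degrees(math.atan2(x2 - x1, y2 - y1)) % 360
        # Eight 45-degree sectors, each centered on its compass point.
        return ["N", "NE", "E", "SE", "S", "SW", "W", "NW"][round(bearing / 45) % 8]
    ```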

    My programming career has been built on googling around to explore problems somebody asked me to solve, and then synthesizing the results I found into code. My first reaction was that ChatGPT might short-circuit that process. What would my career have been like if this had been available back then? I feel like all that googling over the years gave me a sense of problem spaces and a certain amount of domain knowledge, and I would have missed out on that with ChatGPT. On the other hand, it took knowledge to know whether its answer was correct…

    The other thing I thought was that during my career I’ve gone from hand-coded HTML to Perl CGI to Cold Fusion to PHP to web frameworks, and also from straight HTML to CSS to frameworks like Bootstrap. Each time I’ve fretted over not being involved in the layers below. Is ChatGPT just another layer?

    Of course, I have no clue about the browser internals, or about the OS, but I know that somebody does. At some level it’s a clockwork engine that can be picked at and understood. ChatGPT feels different: people don’t actually know its internals, and I worry that future generations of programmers will be generating code that they don’t understand, and that maybe nobody will be able to understand.

  • @lloram239@feddit.de · 1 year ago

    > The work was about 80% reading the API’s documentation, 18% configuring my API keys and downloading the example project and things like that, and 2% writing code to hook everything up.

    Or in other words, 100% of the work could be automated by AI quite easily. The fun with ChatGPT and friends is that it doesn’t stop at programming; it can automate all the other stuff as well. It can read documentation, answer emails, and produce shell scripts just fine. It’s also not like you are “programming in English”; you are having a conversation with an AI programmer. It can dynamically change the code and adapt to new requirements. Don’t like the Python code it put out? Ask it to rewrite it in C++. Don’t like it using libbar? Ask it to use libfoo instead. That’s the power of AI programming: it gives you a much higher level of abstraction than any classic programming tool. And it’s cross-domain: if you need some plausible data for a test case, it can write that too. Want to replace your programmer art with something professional? Done. Need an icon for your app? Done. If you are just looking for a way to name a thing in your code, AI is the best thesaurus you could hope for. If you like, it can even review and criticize your requirements.
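
    That conversational loop is easy to script, too. A minimal sketch (using the pre-1.0 `openai` Python package; the model name and prompts here are illustrative, not from the article):

    ```python
    import openai  # pip install "openai<1.0"; assumes openai.api_key is set

    def ask(history, request):
        """Append a request to an ongoing conversation and return the AI's reply."""
        history.append({"role": "user", "content": request})
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=history,
        )
        reply = response["choices"][0]["message"]["content"]
        history.append({"role": "assistant", "content": reply})
        return reply

    # One conversation, evolving requirements: the AI keeps the full context.
    chat = [{"role": "system", "content": "You are a helpful programmer."}]
    print(ask(chat, "Write a Python function that fetches a URL and caches it."))
    print(ask(chat, "Rewrite that in C++ using libfoo instead."))
    ```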

    And yes, ChatGPT, especially the free version, gets everything outside of trivial problems wrong. But the point about AI is not where we are now, but where we will be in 5-10 years. ChatGPT is just the first prototype that kind of works. It’s the AltaVista of AI; wait until we get to the Google of AI.

    Given the restrictions under which ChatGPT has to work (no access to documentation, everything done from memory, no ability to actually test code or use a compiler), it’s insane how good it already is.