• ThePowerOfGeek@lemmy.world · 4 days ago (edited)

    I’ve said it before, but I’ll say it again: this sounds like complete bullshit - at least for now. Having played with multiple generative models for code generation myself, I’ve found that four times out of five there are profound problems with the code they spit out. And sometimes it’s complete crap. An example of the kind of subtle flaw I mean is sketched below.
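
    To make that concrete, here’s a purely hypothetical sketch (not output from any real model run) of the sort of plausible-looking bug these tools tend to produce - code that passes a quick glance and even works once, then quietly misbehaves:

    ```python
    # Hypothetical illustration of a classic subtle bug often seen in
    # generated Python: a mutable default argument. The `seen` set is
    # created once at function definition, so state leaks across calls.
    def dedupe(items, seen=set()):
        out = []
        for item in items:
            if item not in seen:
                seen.add(item)
                out.append(item)
        return out

    # First call looks correct; the second silently drops 'b' because
    # the shared default set still remembers the previous call.
    print(dedupe(["a", "b", "a"]))  # ['a', 'b']
    print(dedupe(["b", "c"]))       # ['c']
    ```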

    Sure, you can refine your prompts to improve the quality. But at that point it’s usually quicker, easier, and more accurate to just write the code yourself.

    And ‘vibe coding’ sounds like conceptual vaporware. Unless you feed a shitload of often-proprietary data into the LLM, chances are it will not be able to capture enough of your business rules. As a result, the code it outputs is deeply flawed. I don’t really see a way around that, at least while there are experienced developers around who can bridge the gap better than AIs can. The sketch below shows the kind of rule I mean.
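
    To illustrate with a completely made-up policy (the tiers, windows, and function name here are all invented for the example): rules like this live in internal documents, not in any public training data, so a model can only guess at them.

    ```python
    from datetime import date

    # Invented business rule for illustration only: refunds are full within
    # 14 days, prorated up to 30 days, zero afterwards -- except "enterprise"
    # accounts get the full 30 days. Nothing here is inferable from generic
    # training data; it would come from proprietary policy docs.
    def refund_fraction(purchased: date, requested: date, tier: str) -> float:
        days = (requested - purchased).days
        full_window = 30 if tier == "enterprise" else 14
        if days <= full_window:
            return 1.0
        if days <= 30:
            return (30 - days) / (30 - full_window)
        return 0.0

    print(refund_fraction(date(2024, 1, 1), date(2024, 1, 20), "standard"))    # 0.6875
    print(refund_fraction(date(2024, 1, 1), date(2024, 1, 20), "enterprise"))  # 1.0
    ```

    Get any one of those thresholds or exceptions wrong and the code compiles, runs, and quietly costs you money.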

    ETA:

    There was a point in the late 1970s to early ’80s when many people thought programming skills were required to use a computer effectively, because there were very few pre-built applications for the various computer platforms available. School systems worldwide launched computer-literacy efforts to teach people to code.

    Before too long, people built useful software applications that let non-coders use computers easily, no programming required. Even so, programmers didn’t disappear; instead, they used those applications to create better and more complex programs. Perhaps the same will happen with AI coding tools.

    This is an interesting analogy. I’m not sure the two concepts (using a computer vs. creating software on a computer) are as close as the author thinks. And I still think it underestimates the importance of accurately implementing proprietary business rules. But they might be on to something. Maybe.