• SirDerpy@lemmy.world · 3 months ago

    I feel like it will get to the point where AI will start writing code that works but nobody can understand or maintain including AI

    Already there, and we have been for a while. In my work we often don’t understand how the AI itself works. We independently test it for accuracy, then begin trusting results without verification. But at no point do we really understand the logic of how the AI gets from input to output.
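
    To illustrate, this is roughly what "test for accuracy without understanding the logic" looks like in practice: the model is treated as a black box, scored purely on input/output behavior. The names here (`predict`, `ParityModel`) are hypothetical stand-ins, not any specific system.

    ```python
    def black_box_accuracy(model, inputs, expected):
        """Score a model purely on its input/output behavior,
        without inspecting how it reaches its answers."""
        correct = sum(1 for x, y in zip(inputs, expected)
                      if model.predict(x) == y)
        return correct / len(expected)

    class ParityModel:
        # Toy stand-in for an opaque model: labels ints as odd (1) or even (0).
        def predict(self, x):
            return x % 2

    model = ParityModel()
    print(black_box_accuracy(model, [1, 2, 3, 4], [1, 0, 1, 0]))  # 1.0
    ```

    If the score clears a threshold on held-out data, the model gets trusted, internals unseen.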

    If you are able to explain the requirements to an AI so fully that the AI can do it correctly, it would have taken less time to program it yourself.

    This makes sense for a one-off job. But it doesn’t make sense when there are a hundred jobs with only minor differences. For example, the AI writes a hundred AIs; we kill all but the three to five best models.
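
    As a rough sketch of that generate-many, keep-few loop (all names and the fitness function here are illustrative, not any real pipeline): produce a hundred candidate models with minor variations, score each one, and discard everything outside the top handful.

    ```python
    import random

    def make_candidate(seed):
        # A "model" is just a random threshold here; in practice each
        # candidate would be generated code or a trained network.
        rng = random.Random(seed)
        return {"id": seed, "threshold": rng.uniform(0, 1)}

    def score(candidate):
        # Stand-in fitness: how close the threshold is to 0.5.
        return 1 - abs(candidate["threshold"] - 0.5)

    # Generate a hundred variants, keep only the five best.
    candidates = [make_candidate(s) for s in range(100)]
    survivors = sorted(candidates, key=score, reverse=True)[:5]
    print([c["id"] for c in survivors])
    ```

    The per-job cost of specifying requirements is paid once; the variation and culling are cheap to repeat.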