cross-posted from: https://lemmy.zip/post/49954591
“No Duh,” say senior developers everywhere.
The article explains that vibe code is often close to functional, but not quite there, requiring developers to go in and find where the problems are, resulting in a net slowdown of development rather than productivity gains.
Then there’s the issue of finding an agreed-upon way of tracking productivity gains, a glaring omission given the billions of dollars being invested in AI.
According to Bain & Company, companies will need to fully commit themselves to realize the gains they’ve been promised.
“Fully commit” to see the light? That… sounds more like a kind of religion than like critical or even rational thinking.
Probably by counting produced lines of code, regardless of their correctness or maintainability.
And that’s probably combined with what John Ousterhout calls “debugging a system into existence”: just assuming the newly generated code works until somebody inevitably files a bug report, then doing the absolute minimum to make that specific bug report go away, preferably by adding even more code.
It seems like a good way to actually determine productivity would be to make it competitive.
Have marathon and long-term coding competitions between 100% human coding, AI-assisted coding, and 100% AI coding. Rate them on total time worked, mistakes, coverage, maintainability, extensibility, etc., and test the programmers on knowledge of their own code.
That’s what I thought: count each line of generated code, even if it’s deleted afterwards. Or have someone try to score as high as possible in a single trial.