Quote of the Day
Vibe coding feels productive. You ship fast, things look cool, and there’s momentum. But under the hood:
- Code quality often drops
- Scalability becomes an afterthought
- Debugging turns into a nightmare
- Technical debt builds up silently
Vibe Coach
April 22, 2026
The above is true, and several other things are problems as well. A case could be made that vibe coding is not worth the benefits, at least for today.
Six months ago, I had no concern that AI was going to replace my job as a software engineer. Today, I know it is going to happen. For now, I can still review the AI-written code and find and fix problems before it is deployed. I make it more efficient, more maintainable, and easier to extend. I make it use less memory. I find and fix potential race conditions.* I find and fix edge cases in parameter validation and in handling unexpected responses from other systems.
But I expect that six months from now AI will be able to do all of that as well as I can, and do it in 1/1000th of the time it takes me, assuming it still makes those mistakes.
When I first started programming it was on an analog computer with patch cables, precision potentiometers, and capacitors (to make integrators), with an ink and paper plotter for the output. The digital computer I learned to program that same semester took its input, both code and data, on punched cards. The output was on a line printer which sounded something like a rotary saw cutting through plywood.
The teletype with a line editor connected to the mainframe a year or two later was an incredible upgrade. And I could save my programs and data on disk! No more punched-card decks!
My first personal computer, an IBM XT, had a 10 Mbyte hard drive, and I edited my first programs with EDLIN (another line editor).
After working for a few years, I went to graduate school. I remember the computer room having signs on the wall about introductions to something called a “visual editor.” Whatever, I thought. The line editors I was accustomed to were visual. What were they talking about? Then I looked over the shoulder of someone using a “visual editor,” and seeing what it could do was almost orgasmic.
After a few more years, “Integrated Development Environments” (IDEs) came out. I mostly ignored them. The visual editor I was using was fine; I would exit, run “make,” and then invoke the debugger, visual editor, or whatever again as required. A few years more and the IDE was vastly superior to separate tools.
The evolving IDEs were good for a couple of decades, and occasionally code generators would produce specialized code (I wrote one when I worked at Qualcomm in the early and mid 2010s).
About two years ago I started asking chatbots to write a few code snippets, which I would copy and paste into my programs. The results were surprisingly good. But if you asked for a program that collected traffic data from the firewalls, correlated the IPs and domains with lists of known-bad IPs and domains, and then put our network computers that had connections (or attempted connections) to those known-bad addresses into a graph database with all of those connections, the answer would be, “Sorry. I can’t do that.” I know because I tried.
Today, if I were to make that request, it would ask a few questions, then write the code and add features I had not thought of. Oh, and I could make the request and answer the questions either by text or by speaking into my headset. **
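At its core, the request from two years ago is a correlation-and-graph problem. Here is a minimal sketch; the hostnames, IPs, and data shapes are hypothetical, and a real version would pull from firewall log exports and a live threat feed, and write to an actual graph database rather than a dict:

```python
from collections import defaultdict

# Assumed threat feed of known-bad IPs (hypothetical addresses)
known_bad_ips = {"203.0.113.7", "198.51.100.23"}

# (internal_host, remote_ip) pairs, as a firewall log export might yield
connections = [
    ("ws-14", "203.0.113.7"),
    ("ws-14", "192.0.2.10"),
    ("db-02", "198.51.100.23"),
]

# Adjacency map standing in for a real graph database: each internal
# host maps to the set of known-bad IPs it connected (or tried) to reach
graph = defaultdict(set)
for host, remote in connections:
    if remote in known_bad_ips:
        graph[host].add(remote)

print(dict(graph))
```

The interesting part of the original request is everything around this kernel: parsing real firewall formats, resolving domains, and loading the results into a graph database with the full connection history.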
Sometime in the very near future, Claude Mythos (and probably others) will be released. Here is what is showing up in tests of the preview:
Claude Mythos Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.
Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout—for economies, public safety, and national security—could be severe.
My sources tell me, “It’s more powerful than they say.”
It took many years to go from the first line editor to the first “visual editor.” It took more years to get to an IDE superior to independent tools. If you were to plot the capabilities of programming development environments against time, with a log scale for the capabilities, you would probably still get an exponential-looking curve on a linear time axis. That is, I suspect the exponent of the capability growth is itself increasing.
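A toy numeric illustration of that last claim, with made-up constants rather than real data: if capability grows double-exponentially, say cap(t) = exp(a·exp(b·t)), then even the logarithm of capability still grows exponentially against linear time.

```python
import math

# Assumed constants a=0.5, b=0.3; illustrative only, not real data.
def capability(t, a=0.5, b=0.3):
    """Double-exponential growth: cap(t) = exp(a * exp(b * t))."""
    return math.exp(a * math.exp(b * t))

# Take the log, as a log-scale plot would, at t = 0..5
log_caps = [math.log(capability(t)) for t in range(6)]

# log(cap(t)) = a * exp(b * t), so successive ratios equal the
# constant exp(b): the log-scale curve is itself still exponential.
ratios = [log_caps[i + 1] / log_caps[i] for i in range(5)]
print(ratios)
```

In other words, an exponential-looking curve on a log-scale axis is the signature of a growth rate whose exponent is increasing.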
I’m reminded of an email conversation I had with one of my blog readers who used to work at Microsoft at the same time I did. A snippet of his musings (from April 2, 2025):
The Home Economics class of tomorrow won’t be teaching kids how to cook but rather teaching them how to write prompts.
And if you don’t think this is happening today, look at the kids writing prompts to create high quality AI video content used in ads, anime videos, even porn. We’re starting to see more prompt writers, and prompt writers are becoming tomorrow’s artists.
Tomorrow’s billionaires? Some will be the same as today’s billionaires; the people who can help you create what’s in your imagination. Microsoft Word and PowerPoint let you create what’s in your head today. AI engines and powerful, flexible, simple prompt syntaxes will let people create what’s in their imaginations tomorrow, and the inventors of those engines and syntax structures will become billionaires.
Who knows. Perhaps the very best engines, the best syntaxes, and the best prompt writers will find their way onto the design team for the very first NCC-1701.
I was skeptical and responded:
You make some good points, but I suspect they won’t be valid for very long. Perhaps a few months. I think what will happen is the chatbots will “learn” that the requirements are insufficiently detailed, and it will ask, just as your waiter/waitress might, “Thin or thick crust?” And the same for other ambiguous requests of every type.
I asked Copilot and Grok for their opinions on prompt engineers. Here is a portion of the response I was expecting (emphasis added):
I’d push back a bit: he assumes AI will stay “dumb” about context forever, requiring humans to spoon-feed it every detail. Today’s AI already shows signs of improvement. Advanced models can infer intent from vague prompts by learning user preferences over time or pulling context from past interactions. Imagine a Star Trek replicator that knows Captain Picard’s “Hot” means 85°C because he’s ordered it 47 times before, or that “Earl Grey” implies a medium-steeped brew based on his British leanings. Future AI could ask clarifying questions—“Do you want your pizza spicy or just hot from the oven?”—or use sensors to detect your mood and adjust the recipe. This adaptability might reduce the need for “professional” prompt writers, at least for everyday tasks.
On April 28, 2025, I sent him this:
Tech’s hottest job has imploded https://www.linkedin.com/news/story/Tech-s-hottest-job-has-imploded-7278658
From that posting:
The development of artificial intelligence is moving so fast, reports The Wall Street Journal, that one of the field’s hottest jobs — prompt engineering — is already on its way out. Just a couple of years ago, companies would pay up to $200,000 to have someone “crafting the exact right inputs” to produce useful results from large language models. But as models have gotten smarter, and more employees are trained on prompting, there’s simply less call for dedicated prompt engineers.
Yesterday morning my manager sent our team a link to …/prompt-master: A Claude skill that writes the accurate prompts for any AI tool. Zero tokens or credits wasted. Full context and memory retention · GitHub.
My job, as it exists today, will be obsolete within a few dozen weeks. It may take a few months for management to have confidence in the AI results, but the future is clear. And I expect most white-collar jobs, at nearly any level, will soon be replaceable “by a very small shell script.” ***
We live in interesting times.
* Bugs that may show up only when the timing is just right, and hence exhibit noticeable (possibly catastrophic) symptoms as infrequently as once per hour, day, or week.
** And we all laughed in Star Trek IV: The Voyage Home when Scotty tried to interact with a computer by speaking to the mouse as if it were a microphone. We are now truly living in the future.
On the bright side, you will run out of reasons not to move to your bunker. For once, I am more optimistic than you. I don’t write code and never did, but I did write other things, namely budgets. I suppose it would be nice to have a tool to check for technical errors, but the process is fundamentally political, not technical. It would be entertaining to turn Claude loose on the drafting of laws or court decisions. Either Claude would have a nervous breakdown, or the people would simply disconnect it after being asked for the 10,000th time for clarification. I have a book called “That’s Not What We Meant to Do,” with marginal notes made by a former governor.
When my prompt can be as vague as
“Write a program so when the outside temp is greater than the inside temp, turn the fan on. Recommend hardware and cables, and provide instructions for installation. It needs to run on intermittent 12v power from a solar panel and be waterproof.”
It will have to select a controller, plan the connections, and solve the power issue, which might include a battery and a solar charge controller. It will have to be able to show me, with Lego-kit simplicity, how to install the thing.
Short of that, it will require an integration engineer and some level of knowledgeable prompt construction.
I think we will get there, but I don’t think we’ll measure that time in months…
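For what it’s worth, the decision rule in that prompt is the trivial part; controller selection, power budgeting, and weatherproofing are where the integration engineer earns their keep. A minimal sketch of just the rule, with an assumed hysteresis band (not in the prompt) so the fan doesn’t rapidly cycle near the crossover point:

```python
HYSTERESIS_C = 1.0  # assumed dead band in degrees C; the prompt doesn't specify one

def fan_should_run(outside_c, inside_c, currently_on):
    """Run the fan when outside is warmer than inside, with hysteresis."""
    if outside_c > inside_c + HYSTERESIS_C:
        return True       # clearly warmer outside: turn/keep the fan on
    if outside_c < inside_c - HYSTERESIS_C:
        return False      # clearly cooler outside: turn/keep the fan off
    return currently_on   # inside the dead band: hold the current state
```

On real hardware this would poll two temperature sensors and drive a relay or MOSFET from intermittent 12 V solar power; that glue, plus the enclosure and wiring, is exactly the part the prompt hand-waves.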
Several thoughts.
Yes, it’s progressing very rapidly, but it’s only as good as its users. We will still need smart, high-level people who really grok the big picture to sanity-check output and events.
Right now, there is vastly more data collected than can be analyzed by a human. AI changes that. Privacy will effectively disappear. If bad people are in charge, we have very dark times ahead, because they’ll use the Panopticon to control us and kill us off. OTOH, if white-hats gain the upper hand, getting away with crime and fraud will become MUCH harder, particularly political crime, and we have a golden age ahead. I don’t see much of a middle path.
Most AI users, and most people, are not very smart, and not deep thinkers or particularly curious. Most don’t care about any of the things taught at school, don’t care how they form their world-views, and just want to do the minimum work needed to get the grade. AI will make this trend worse. Schools and education will have to change significantly to remain relevant.
From what I’m seeing, many primarily-software companies would be better off teaching their existing programmers to use AI more effectively, so they can do much more at the same cost (and thus make more boutique software cost-effective to create), rather than laying off programmers in a cost-cutting attempt to create the same amount of software with fewer people. This will result in a lot more “small-team” developer groups and fewer large orgs.
There are still not a lot of people talking seriously about the social ramifications.
A lot of the current AI investment will go bust; AI will remain huge but not everyone will be Wall Street winners in that space. Boomers going into retirement watching their portfolio (and possibly house valuations) evaporate will be interesting.
Many more thoughts, but I have to get back to grading. We use pencil/paper tests because it’s much harder to cheat on them; either you can do it, or you can’t 🙂
Yes, that was a humorous scene with Scotty. However, I also remember the TOS episode “The Ultimate Computer,” and a lot of other sci-fi warnings (e.g., Colossus). We will always need to be careful what we hook it up to.
I’ve been using ChatGPT a lot lately, with surprisingly good results. My wife and I have sued a fraudulent contractor who, it turned out, was unlicensed, among other things. He owes us a lot of money. I didn’t use AI on this case, which has been underway since last October, until a few weeks ago. In those few weeks, it has:
- Reviewed our court motions and orders and made excellent suggestions
- Provided templates for various Writs of Execution and Writs of Garnishment
- Provided text and telephone scripts to use in communicating with the defendant
- Provided some excellent analysis on which banks to target with writs of garnishment for (presumably) maximum cash yield
- Provided strategy for timing of garnishment cycles
- Provided analysis of which vehicles to seize, if we go that route, as well as pricing and the specific process to do so with the Sheriff in the defendant’s county
- And a whole lot more
We double check any legal citations, but all in all, it has vastly improved the quality of our Pro Se case.
Wise to check the citations. There’s a current mess where a legal firm did not, and the AI was found to be “hallucinating” (making up citations). I don’t have the details of which AI was used.
Currently, many jobs can be replaced by AI. Soon the ONLY jobs that won’t be are those requiring constant mobility and those where judgment is required. Eventually, if and when we develop portable power supplies that allow robots independent free movement for long periods, MOST jobs will be replaced. What comes after that is anybody’s guess: either humans will be exterminated, or we will end up in a society that looks something akin to the spaceship/cruise ship from the movie WALL-E.
I suspect much of it will be largely concealed from view right up until it isn’t, but it will be interesting to see how all this affects government, which, I predict, will be the absolute last segment of society to utilize the advancements AI offers. Assuming it embraces any of it at all.
Having done a number of tests with AI coding tools at work, I can report they are very much a mixed bag. Possibly able to replace an apprentice level beginner, but even that comes with warning signs. After my first session I felt very much like Mickey Mouse in “Sorcerer’s Apprentice” (part of “Fantasia”). I had cast the correct spell and the mops were busy sweeping the floor, but it definitely wasn’t right.
Part of the story is that different AI models produce different quality results. But even the expensive ones aren’t particularly foolproof. I found myself repeatedly having to give additional instructions to get the job finished. And at a colleague’s suggestion I fed the generated code to a different model and asked for a code review; it found a dozen defects (in about 600 lines of code), some nits but one or two serious ones.
This weekend’s Wall Street Journal has a nice article about AI. One point is that using AI as a substitute worker produces poor results. The better way is as a partner — the article says “think of someone who reads everything and understands nothing”. Ask for answers and then challenge them, just as you would when mentoring a junior team member.
I’ve been following Nate Jones on Youtube. (this is one example https://www.youtube.com/watch?v=brBPsPPyuQM ). He posts more or less daily. How good the output is depends greatly on how good your spec, feedback, test metrics, and definition of “success” are. He has a lot of good info and insight on trends, problems, events, and big picture takes on what’s going on.