How I Got Everything Wrong About Coding Agents and More

by Ronald Yu (a former Meta employee, now working at an AI startup)
This article was first published on Substack.


AI Coding Agents

February 2026 has been an inflection point for AI coding agents. At this point coding agents with existing tool-calling and reasoning capabilities are already strong enough to make a majority of software engineering jobs obsolete, and they are only getting stronger.

In light of these developments, this post revisits well-trodden questions about the implications of AI for society and work. But first, we must take a long detour through my personal epistemological journey and discuss how I completely bungled my predictions about the capabilities of these reasoning models.

How did I get it so wrong?

I wrote several posts around January 2025 that were implicitly or explicitly bearish on the future of reasoning models, and I now believe that view to be around 90% wrong. How was I so blindsided by the success of AI coding agents, especially given how optimistic so many people around me were?

The reasoning behind my bearishness last year was that AI bulls were blindly drawing an exponential curve between points in situations where it didn't make sense (e.g. a line from GPT-4 to o3) and extrapolating to declare AGI imminent. Since I did not believe the exponential existed, I saw no sense in extrapolating it, and I did not buy into the hype. However, I did not realize that, regardless of whether it belonged on an exponential curve or not, reinforcement-learning-style reasoning was powerful enough to bring coding agents to their current capabilities.

Zooming out to a more epistemological discussion: over the past year or so, I have come to realize that among my predictions about the future of society and my personal life, my biggest errors have all stemmed from being too pessimistic about the likelihood of success. In the case of coding agents, I thought test-time reasoning and tool-calling were lipstick on a pig, dressing up what felt like a pretty weak underlying model.

My Pessimism Bias

In addition to coding agents, other examples of my pessimism bias include the following. I thought San Francisco was doomed post-pandemic because of its local governance choices. During Meta's stock rise from ~$100 to ~$700, I thought it was overpriced when it hit $300, $400, $500, and $600, because of how dysfunctional things seemed internally. I thought certain people, at various degrees of remove from me, would flop due to fatal personality flaws (e.g. being narcissistic, being grifty, being selfish, having a tendency to BS, having really dumb ideas), but they have objectively had strong career success.

Takeaways from my errors

I think there are two takeaways from my errors. First, some people say I'm a contrarian, and in the age of endless AI hype, being pessimistic was somewhat of a contrarian belief. But at the end of the day, contrarians are just people who believe they have alpha that the rest of the world does not, and it is easy to fool yourself into thinking you see "alpha" by nitpicking at flaws and inventing things to criticize. If only the public knew how dumb coding models were, or how disorganized Meta was, or what a charlatan this person was, they would come to the same pessimistic conclusion as me. But alpha is only alpha if it is accurate, which is too often not the case for me. The next time I feel like being a contrarian, I should show some humility and ask myself, "Am I being contrarian because I have alpha, or because I wish I had alpha?"

Second, I think my prior about what success looks like is too narrow. My prior over the last few years has been that success is difficult to come by, and therefore everything needs to break perfectly for things to work out. Under that prior, if you are able to find critical flaws in a person or organization, you can predict their downfall.

In research, there's an analogy that a paper can be a cockroach or a three-eyed dog. A three-eyed dog looks lovely on the surface, but then as a reviewer you notice something a bit off, and once you pinpoint its fatal flaws you reject the paper. On the other hand, a cockroach might look ugly at first glance, but no matter what you do to try to kill it, it simply won't die, and as a reviewer you have no choice but to accept the paper. As an author, you want your paper to be like a cockroach -- it might not be pretty, but at least it is rigorous and has no critical weaknesses.

I think that success in life is actually more about being a three-eyed dog. Research papers are relatively small in scope, so any flaw can turn into a fatal flaw. But in a complex arena where success is so difficult to come by that everyone has flaws, there is no such thing as a fatal flaw. Success is driven by your positive qualities rather than your flaws. Based on this conclusion, I should adopt the following actions and mindset changes:

  1. Be more optimistic and less negative (duh).

  2. Be more open-minded about collaborators in life and try to make more mutually beneficial trades and interactions, even if the other party seems rife with what I believe to be fatal flaws.

  3. Unless you are a professional trader or forecaster, you can be very wrong about a lot of things and still be very successful. Hence, when forming a worldview, I should optimize for one that maximally biases me towards action while remaining generally accurate, rather than one that is maximally accurate but leaves me passive.

  4. Perhaps a downstream effect of this conclusion is that I should stop trying to trade or make money based on my worldview, which is not optimized for maximum truth-seeking.

As a final addendum, in this particular instance (the rise of AI), I think additional sources of this bias of mine included general negativity bias, envy of others' success, an aversion to seeming naive and living in la-la land, a subconscious desire for things to stay the same (a dislike of change), and some sort of "YC/a16z Derangement Syndrome": wanting to take the opposite position from certain people who can be annoying in public. It is best to be mindful of these biases when forming future opinions.

Is the market in an AI-induced bubble?

Before February, I would have said possibly. Now, my answer is a hard no: I see no way that this level of productivity increase was already priced into the market. I think the temptation to call the market a bubble stems from people seeing VCs throwing dumb money at all types of clearly bad ventures. But, similar to my earlier point, even if a bunch of dumb money is being thrown around, I don't see how the market can crash if the software department of every single S&P 500 company increases its productivity by an order of magnitude.

Will AI-induced layoffs come?

Yes. Sometimes I see pundits online busting out this chart or that statistic claiming that hiring slowdowns and layoffs are due to pandemic-era over-hiring rather than AI. I don't see how you can use these AI tools, watch them replace the most cognitively demanding parts of your job and vastly increase your productivity, and still think that businesses will not adjust their hiring goals in response. Perhaps the Jevons paradox kicks in eventually and there are more software jobs at an industry-wide level, but in the short to medium term, for the many businesses with a fixed or slowly growing set of software needs, AI-induced layoffs of software engineers seem like an inevitability to me.

How do software engineers spend their time now?

Software engineering for me these days is mostly just coming up with a desired verifiable result (e.g. write code such that XXX command produces YYY output) and asking Claude to bang its head against the wall until it achieves said result. And it usually just works. AI has earned quite a sticky reputation for hallucinating a lot, making many mistakes, and having limited utility, but I think those things are mostly no longer true of existing AI coding agents.
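
To make the shape of that workflow concrete, here is a minimal sketch in Python. The ask_agent function is a hypothetical placeholder for however you actually invoke a coding agent (a CLI call, an API, whatever your setup uses); the point is the loop structure, not any particular API.

    import subprocess

    def ask_agent(prompt: str) -> None:
        """Hypothetical stand-in for invoking a coding agent (e.g. Claude)
        on the current repository. The real call depends on your tooling."""
        raise NotImplementedError

    def check(command: list[str], expected: str) -> bool:
        """The verifiable result: does running `command` print `expected`?"""
        result = subprocess.run(command, capture_output=True, text=True)
        return result.stdout.strip() == expected

    def bang_head_against_wall(command: list[str], expected: str,
                               max_attempts: int = 10) -> bool:
        """Let the agent iterate until the check passes, or give up."""
        for _ in range(max_attempts):
            if check(command, expected):
                return True
            ask_agent(f"Running {' '.join(command)} should print "
                      f"{expected!r} but doesn't. Edit the code until it does.")
        return check(command, expected)

The human's job reduces to choosing the command and the expected output; the agent handles everything in between.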

Almost everyone I know is now working longer hours due to the rise of AI coding tools. In my experience, how hard you work is often bottlenecked by how tired you are or how much energy you have spent, rather than by time, and with AI off-loading much of the mental burden of day-to-day software work, the amount of time people spend on work will increase.

How should software engineers spend their time now?

I've seen the argument that, now that coding is far more productive and valuable than it used to be, you should tilt your workday priorities much more heavily towards coding and reject meetings and other duties even more aggressively than before.

I'm also sympathetic to the argument that software engineers should shift away from coding and start refining skills that won't be completely replaced by AI soon, such as writing Google docs, blogging, podcasting, and becoming a people person. After all, when using these coding tools, it feels like a near certainty that coding as we know it will be obsolete in the near future.

At this point, it feels like the end of coding is so imminent that any serious head start you try to get on compounding "AI-proof" skills would be pretty trivial. Hence, for now I plan to follow the SF tech-bro strategy of just coding more and hoping that my increased productivity makes me financially well-off enough in the short term to handle losing my job in the future. I guess I will continue writing this blog, though.