richard·keil

The Carpenter and AI

February 20, 2026 · 7 min read

[No AI was involved in the creation of this text.]

I think that within 6 to 12 months, my job as a software engineer will look very different.

Although everyone has witnessed the major changes LLM-based AI tools have brought to engineering, the last few weeks have given me real pause.

I work in a codebase maintained by ~10 engineers, which grew quickly over the last four years. About half a year ago, models were pretty good at building helpful scripts or tools from scratch, but often failed to make correct changes to the main codebase. They lacked context on “how we do things here,” and reviewing their output would often take longer than doing it yourself.

However, since the end of last year, models have become much more helpful, especially in our codebase. It suddenly felt like they could better understand how our software is structured, the intentions behind it, and where to look before starting on a task.

During all this time, I mostly saw them as a complementary tool that aided my development flow.

Then, about three weeks ago, while investigating a bug, I started experimenting with a few MCPs and skills — namely, to automatically fetch error traces, access the local DB, and read production logs. This tweak, providing a bit more context than just code, had a great effect. My agent would automatically query error logs, search the codebase to find the relevant parts, read the surrounding logic to understand the context, and then write SQL queries against my local database to check its state. Based on all the information it acquired, it created a report including a timeline of what happened, along with a proposed fix. I then used a previously created skill to automatically file a Linear bug ticket in our usual style.
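To make the "access the local DB" part concrete: a minimal sketch of the kind of read-only query tool such a setup might expose to the agent. Everything here is illustrative — the function name and the SELECT-only guard are my assumptions, and a real setup would register this as a tool on an MCP server rather than call it directly.

```python
import sqlite3


def query_local_db(db_path: str, sql: str) -> list[dict]:
    """Run a read-only SQL query against a local SQLite DB, rows as dicts.

    Hypothetical sketch of a tool an MCP server might expose so an agent
    can inspect database state while investigating a bug.
    """
    # Reject anything that isn't a plain SELECT so the agent cannot mutate state.
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("only SELECT queries are allowed")

    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row  # access columns by name
    try:
        return [dict(row) for row in conn.execute(sql).fetchall()]
    finally:
        conn.close()
```

Keeping the tool read-only is the interesting design choice: the agent gets enough context to build its timeline and report, without being able to change anything it is observing.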

The time since then has felt like a blur in terms of AI capabilities. By now, I’m rarely writing code myself; instead I’m mostly overseeing agents and reviewing their output, steering them in the right direction with my experience and taste. I sometimes feel like a wizard summoning the spirits to do my bidding, and quite enjoy my time conjuring up new features and bug fixes¹.

That is, while feeling a bit uncertain about the future. As mentioned, my job basically no longer involves plain coding — it’s about planning, discussing with colleagues, “discussing” with agents, splitting tasks into smaller sub-problems, and eventually reviewing the code the tools generate. For local problems, agents often come up with better solutions than I could. I learn a lot from that. For larger tasks, I have to nudge the agent in the right direction — by providing it with more context (DB access / logs / relevant files), or by providing it with taste. Which brings us closer to the title of this post.

Taste is something that I’m proud of. The ability to sense how clean certain code is, how understandable it is to others, and how robust and stable it is. But taste does not have intrinsic value in software engineering — it is a means to an end. Good taste leads to better architectures and implementations, which in turn lead to better products, which means we can solve our original human problems better.

As AI labs work to improve the capabilities of their agentic LLMs, they are aggressively post-training on ever more curated, high-quality software engineering tasks. They are infusing more and more taste into their models. Now, one might say that a bunch of matrix multiplications could hardly qualify as having taste; but what is taste, other than having seen many ways of doing things and the effects those things have on their environment? And what is post-training, really, other than encoding these observations into weights and biases?

So where does that lead us? My taste will become less relevant. As models propose solutions aligned with the taste infused by their creators, working against them will take effort. You’ll have to be a coding activist to swim against the current instead of simply accepting what works, because you’ll be so much faster. So we’ll have to focus on providing value at the higher abstraction levels.

First, that will be architecture. Given we want the product to behave like X, how do we need to structure the code for that? I am already writing architecture documents with LLMs, but here my taste and personal research are (from my admittedly biased perspective) quite valuable. However, this higher-level taste will eventually be consumed by the models as well. They just need more long-term data to understand how architecture decisions affect software, or better ways to aggregate existing knowledge.

When we can no longer meaningfully contribute to architecture, the next abstraction level will be product. When technical decisions are fully taste-driven by LLMs, who decides what a product should look and behave like to solve our little human problems? The same thinking applies again — when models propose products that feel good to humans, that solve the requirements that we initially defined, who will make the effort to push their own opinion and taste?

One potential answer lies in IKEA. You can get an affordable table that perfectly does its job of being a table. It also looks quite elegant. There’s not much to complain about. Except maybe that IKEA tables don’t have a soul — put one in your apartment, and you make it a little bit more homogeneous. This is what the commoditization of previously expensive things does. It broadens access, but removes soul. There is no carpenter putting their taste into the table to give it a soul; there is only IKEA, which learned what a perfectly working and affordable table that lots of people want looks like.

I currently do feel a bit like the carpenter, watching IKEA gain market share². Software engineering is slowly being commoditized, and so will software products in general. For most people, this is a good thing. Few can afford hand-crafted products from the local carpenter, so going to IKEA is a viable option — especially if everything you get matches your requirements perfectly. This means lots of carpenters will have to move into other professions, as hand-crafted solutions need to be extraordinarily good to justify their price.

Now, my original draft of this post ended with me contemplating eventually moving to the countryside, disconnecting from AI, and instead writing little pieces of “handmade” software in some obscure language, which I could then share with my local community. Not the worst outcome; however, I think there is also a middle ground.

When the creation of software gets commoditized, we can use the freed-up resources to put more effort into creating good software — better understanding the domains we operate in, the users we do this for, and their problems we set out to solve. We can tackle bigger problems, always trying to be one step ahead of what can be fully automated away. Infusing our taste at exactly this spot at the forefront.

I’m not sure where all of this is leading us. For now, however, I’m excited to feel like a wizard a bit more.

Footnotes

  1. The metaphorical closeness to a certain ballad by Goethe does not go unnoticed. I leave potential conclusions to the reader.

  2. At the same time, I am an IKEA customer. I shamelessly vibe-coded and deployed a custom Splitwise alternative within an hour, because I did not want to pay for a Splitwise Pro subscription.

Got thoughts? Send me an email 📮