
Some of My Issues With AI, LLMs and Their Adoption in Programming

Published at 05:55 AM


AI-Generated Code and ‘Forced’ Adoption

A substantial portion of code today is being drafted by AI. While critical architectural and design decisions remain human territory, the sheer volume of AI-generated code that gets only a quick look from human developers presents real risk. It’s not uncommon for subtle bugs, performance bottlenecks, or security vulnerabilities to slip through.

The real danger isn’t that LLMs write buggy code (though they do). It’s that enterprise pressure to “leverage AI” is pushing us into a workflow where complex logic gets generated by machines and rubber-stamped by humans who are increasingly disconnected from the reasoning process.

This gets worse when you add the push from management to aggressively integrate AI tools, sometimes into use cases where the benefits are marginal or don’t exist at all. It has led me to realize that AI is sometimes a solution in search of a problem, often serving as little more than a marketing and sales gimmick in various industries right now, and occasionally causing more problems than it solves (consider the poorly implemented LG ‘AI’ Washing Machine).

The Shift to Declarative Programming and LLMs as a Computing Paradigm

One fascinating, if somewhat concerning, consequence of LLM proliferation is how it’s accelerating the move toward more declarative programming styles. Rather than carefully crafting step-by-step instructions for a machine, we’re increasingly defining what we want and letting the LLM figure out how to do it.

This gets even more pronounced with ‘agentic’ AI.

Consider a simple data transformation task. Instead of writing a Python script to parse a CSV, filter rows, and format output, you might just prompt an LLM:

“As a data processor, your task is to extract all entries from the provided CSV where the ‘Status’ column is ‘Active’, then output the ‘Name’ and ‘Email’ columns as a JSON array. If the input is malformed, describe the issue and exit with an error.”

Here, the LLM itself becomes the computing engine, and your prompt serves as the input arguments. This approach is fast and simple, but it completely abstracts away the underlying computational logic.
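For contrast, the imperative version that prompt replaces might look something like the sketch below. It is a minimal illustration, not anyone’s production code: the function name and file-path handling are my own invention, and only the ‘Status’, ‘Name’, and ‘Email’ columns come from the prompt above.

```python
import csv
import json
import sys


def active_contacts(path: str) -> str:
    """Return the Name/Email of rows whose Status is 'Active', as a JSON array."""
    try:
        with open(path, newline="") as f:
            result = [
                {"Name": row["Name"], "Email": row["Email"]}
                for row in csv.DictReader(f)
                if row.get("Status") == "Active"
            ]
    except (OSError, KeyError) as exc:
        # Malformed or unreadable input: describe the issue and exit with an
        # error, mirroring the failure clause of the prompt above.
        sys.exit(f"Malformed input: {exc}")
    return json.dumps(result, indent=2)


if __name__ == "__main__":
    print(active_contacts(sys.argv[1]))
```

Every step here is inspectable and testable; the prompt version buys brevity by giving all of that up.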

While this works for simple tasks, I feel deeply uncomfortable with its black-box nature. Whether it does anything bad or unexpected is largely left to the mercy of application-level safeguards, whether in code or MCP implementations. It feels like we’re committing to uncharted territory before we’ve even looked around.

Case in point: Simon Willison’s recent post about how Supabase seems to struggle with safeguarding databases from prompt injections.

Foundational Learning, LLMs, and the Joy of Programming

In my experience, LLMs have this weird ability to make you feel like you’re learning without actually understanding what you’re learning. I’m writing a ‘focus mode’ app for Linux that blocks certain programs from launching when it’s active. This was my first time programming in Rust, and I used Claude to speed things up. I knew what I wanted to accomplish and how to approach it, just not in Rust specifically.

What I noticed is that I kept asking Claude the same basic questions over and over (syntax for loops, conditionals, and so on) and kept missing typical Rust idioms and patterns. It made me realize that while LLMs can make experienced programmers much more efficient, they might actually make novice programmers worse.

When developers lean too heavily on AI to auto-complete their code, they risk losing the muscle memory and deeper problem-solving skills that come from wrestling with code line by line. It’s like a senior developer moving into management - their hands-on coding skills naturally fade without constant practice. For those of us who love the actual work of programming, the creative challenge, and the satisfaction of crafting elegant solutions, this shift feels like losing something important.

The most insidious part is that this degradation feels productive. You’re shipping features faster, solving problems quicker, getting results more efficiently. But you’re also becoming dependent on a computational oracle that you don’t understand, can’t debug, and can’t replicate when it breaks.

The Environmental Footprint

The high environmental cost of LLMs is completely at odds with companies trying to force them on everyone. Why do I need an ‘AI overview’ for a simple Google search I’ve been doing for years? I see five different buttons trying to get me to use Microsoft Copilot to summarize an auto-generated email. Every time I click a cell in Excel, the Copilot button follows my cursor to the cell. I constantly see people using LLM chatbots to research things that would be much better served by a basic web search.

We’re living in a world where generating a few lines of code can burn more energy than running that code for years. The math is brutal, and we’re not talking about it because the environmental costs get pushed onto someone else while the productivity benefits feel immediate and personal.