Where LLMs Shine

Published: at 07:39 AM

The “Easy to Verify, Hard to Generate” Sweet Spot

LLMs excel when the output they produce is hard to generate from scratch but easy to verify for correctness. This asymmetry, the ratio of ease of verification to ease of generation, is where they deliver the most value.

Consider these examples:

- Boilerplate code: tedious to write by hand, but quick to confirm against a compiler and a test suite.
- A regular expression: fiddly to construct, but easy to check against a handful of sample inputs.
- A first-draft summary or email: slow to write from a blank page, but fast to review and edit.

In these cases, LLMs provide a solid starting point or draft that humans can then quickly review and refine.
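To make the asymmetry concrete, here is a minimal sketch (the function and the idea that it stands in for an LLM-generated draft are my own illustration, not from the original post): writing a correct slugifier takes some thought, but verifying a draft against a few known cases is nearly instant.

```python
import re

def llm_drafted_slugify(title: str) -> str:
    """Stand-in for an LLM-generated draft: convert a title to a URL slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Cheap verification: a handful of assertions catches most draft errors,
# which is far less effort than writing the function from scratch.
cases = {
    "Hello, World!": "hello-world",
    "  Where LLMs Shine  ": "where-llms-shine",
    "100% Coverage": "100-coverage",
}
for raw, expected in cases.items():
    assert llm_drafted_slugify(raw) == expected, (raw, expected)
print("all cases verified")
```

The point is not the slugifier itself: it is that the verification loop (a few assertions) costs seconds, while the generation step is where the model saves real time.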

The Speed of Trust

I don’t remember where I first read it, but the phrase “innovation happens at the speed of trust” rings especially true in the age of LLMs.

For LLMs to be truly useful, users must trust their output. If every response requires meticulous scrutiny, the efficiency gains evaporate. Conversely, LLMs can be granted far more autonomy once they have earned a strong foundation of trust.

Ken Thompson’s seminal 1984 paper “Reflections on Trusting Trust” offers a profound insight here. Thompson demonstrated that even with full source code access, we cannot truly verify a system’s trustworthiness: a compiler could secretly insert malicious code that perpetuates itself invisibly across generations of compilers.

This principle extends directly to AI systems. Today, we cannot fully audit an LLM’s training process, data, or decision-making pathways. I think AI enterprises have a long way to go on trust before they can be relied on for truly impactful work.

Final Thoughts

The most successful AI applications may operate in this sweet spot: hard to generate, easy to verify, and deployed with appropriate trust boundaries. This is where human judgment and LLM capability may create their most powerful synergy.