“Learning to code” is outdated. “Talking to code” is set to dominate the future.

"Learning to code" is outdated. "Talking to code" is set to dominate the future.

When new technology emerges, we often try to force it into existing frameworks: programmers write code, installers run scripts, and researchers catalog documents for later retrieval. These habits feel natural. But circumstances eventually change, revealing that they were accidents of convention all along, not as durable as they seemed, merely arrangements we had come to accept.

Now, with the arrival of coding agents powered by large language models (LLMs), we are seeing a new pattern that can look almost too simple at first: describe what you want in plain English, and it gets built.

Andrej Karpathy, an observer with deep experience in neural networks, built a telling application called MenuGen. Photograph a restaurant menu, and the tool generates an image illustrating each dish. It is functional, culturally aware, and genuinely useful. What stands out, though, is how it was made: Karpathy didn’t write the code. He described the concept, and an LLM produced both the front end and the back end. It’s a curious admission, but he accepts that he doesn’t deeply understand, in the traditional sense, how MenuGen works.

In essence, the AI now acts as his programmer.

Cyborg Language

There’s a tendency to view this development as just a novelty, another spectacle in the ongoing tech showcase of Silicon Valley. However, the shifts happening here are foundational. The gap between conceiving an idea and bringing it to life is narrowing drastically. Karpathy suggests we might soon reach a point where anyone can launch an AI-driven application just as effortlessly as sharing a video on social media.

Installation scripts are a concrete example, modest enough to make the shift visible. Mintlify, a documentation company, has proposed that by 2026 software should ship with an English-language file named install.md: a human-readable checklist that a coding agent reads and executes step by step, pausing for user approval along the way. The result is far easier to audit than a conventional Bash script. A companion skill.md file describes both installation and usage, published at a predictable URL where coding agents can find it. The striking part is that developers are now writing documentation aimed at machines, yet it remains easy for humans to read and modify.
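Mintlify has not published the exact format, so the layout below is a hypothetical illustration; but the agent’s side of the workflow could be as simple as parsing numbered steps and their commands, then surfacing each one for approval:

```python
import re

def parse_install_md(text):
    """Extract (description, command) pairs from a hypothetical
    install.md layout: a numbered step, followed by the shell
    command an agent would run only after the user approves it."""
    pattern = r"^\d+\.\s+(.+?)\n\s*`([^`]+)`"
    return re.findall(pattern, text, re.MULTILINE)

doc = """\
1. Install dependencies
   `npm install`
2. Copy the example config
   `cp .env.example .env`
"""

# Each step stays human-readable, so the agent can show it
# verbatim before executing anything.
steps = parse_install_md(doc)
```

Because the checklist is plain text, the same file doubles as documentation: a human can run the commands by hand, and an agent can walk through them one approval at a time.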

Karpathy points out that digital services can adapt to be LLM-friendly: Markdown documentation, command-line interfaces driven by plain English, accessible APIs. This convergence effectively merges the user and the AI into a single audience; design for one, and you have largely designed for the other.

LLMs tackle knowledge management in ways Vannevar Bush could hardly have envisioned back in 1945. Bush dreamed of a personal filing machine he called the Memex, capable of storing all of one’s books and correspondence: a mechanized memory allowing quick, flexible retrieval. His ideas foreshadowed what we now call a knowledge base, but LLMs democratize that capability without any complex indexing.

Karpathy’s approach to personal knowledge management is to feed raw sources into an LLM, which then constructs a linked wiki of Markdown documents: summaries, encyclopedic articles, and cross-references between concepts. The AI tends the integrity of the content, flagging inconsistencies and gaps like a dedicated research librarian, and anyone reviewing a file can check its claims against the underlying sources. Instead of wrestling with databases, everything is laid out in plain language.
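One concrete piece of that librarian role, checking the wiki for gaps, is easy to sketch. The [[wikilink]] syntax and flat folder layout below are assumptions for illustration, not Karpathy’s actual setup:

```python
import re
from pathlib import Path

def broken_wikilinks(wiki_dir):
    """Scan a folder of Markdown notes for [[Target]] links that
    point at no existing note: the kind of gap report an LLM
    acting as a research librarian could produce and then fill."""
    notes = {p.stem for p in Path(wiki_dir).glob("*.md")}
    gaps = {}
    for page in Path(wiki_dir).glob("*.md"):
        targets = re.findall(r"\[\[([^\]]+)\]\]", page.read_text())
        missing = sorted(t for t in targets if t not in notes)
        if missing:
            gaps[page.name] = missing
    return gaps
```

Because every note is plain Markdown, the report itself is just file names and link text, something a human can verify directly without touching a database.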

Great Speed Requires Great Care

This approach isn’t infallible, though. LLMs can confidently produce errors, blend unrelated facts, or leap to conclusions from scant evidence. What lands on the wiki can read as authoritative long before anyone has verified it, and the systems for catching such errors are still immature. History suggests that technology often advances faster than our safeguards, inviting misplaced trust in unproven systems.

Nevertheless, the direction is clear. We are moving from an era in which computers demanded precise instructions to one in which they accept intent. Assembly gave way to high-level languages, which gave way to graphical interfaces; those are now giving way to conversation. Each shift has lowered the barriers to building software, and each carries its own costs and complexities that will take time to fully understand.

What sets this transition apart is the medium itself. Previous shifts broadened access to computation while keeping language a mere accessory: technical instructions, button labels, code comments. Now language is the primary interface. Programs are text, installation scripts are paragraphs, and applications are descriptions, executable by a model that can process more written material than a human could read in many lifetimes. Words have always been able to create worlds, but the structures we build from them have changed in a fundamental way: they no longer merely mirror human behavior, they execute.
