Ali Rohde

A Field Guide to Silicon Valley's Most Invasive Memes

A short handbook for understanding how Silicon Valley thinks

@RohdeAli | April 17, 2026

Tech is full of ideas that didn't originate here at all. They drifted in from psychology, philosophy, Cold War game theory, and stray economics papers, then spread like invasive species. In this ecosystem, a concept doesn't rise slowly; you hear it once in a meeting, and by the end of the week it's in every pitch deck, Slack channel, and product review you touch. Tech doesn't just adopt ideas; it absorbs them, compresses them, strips out the caveats, and blasts them through the system. What follows is a field guide to the most contagious ones: a map of the memes the industry runs on, and the surprising places they came from.

Affordances

Interface cues that signal how something should be used. Originating in ecological psychology and later formalized in HCI, they shape how we design agent and LLM interfaces, making it obvious what actions a model can take, what tools it can invoke, and how to signal intent without overwhelming the user.
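
In agent-land, the clearest affordances are often tool schemas: the name, description, and typed parameters tell the model exactly what action is available and how to invoke it. A minimal sketch, in the JSON-schema style most LLM APIs use (the tool name and fields below are invented for illustration):

```python
# A hypothetical tool definition. The name, description, and typed parameters
# are the affordances: they signal what the tool does and how to call it.
search_flights_tool = {
    "name": "search_flights",
    "description": "Search for one-way flights between two airports on a given date.",
    "parameters": {
        "type": "object",
        "properties": {
            "origin": {"type": "string", "description": "IATA airport code, e.g. SFO"},
            "destination": {"type": "string", "description": "IATA airport code, e.g. JFK"},
            "date": {"type": "string", "description": "Departure date, YYYY-MM-DD"},
        },
        "required": ["origin", "destination", "date"],
    },
}
```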

Anti-Patterns

Tempting solutions that reliably cause bad outcomes. Originating in software engineering, the idea surfaces in AI when people try things like "just tell the model not to hallucinate," or when RLHF reinforces behaviors teams were actually trying to eliminate.

Bikeshedding

Over-focusing on trivial issues because the important ones are harder. From Parkinson's Law of Triviality, it shows up in endless debates over small prompt tweaks or logging formats instead of addressing core architecture, data quality, or evaluation rigor.

Capability Overhang

The gap between what a model can already do and what people have figured out how to get it to do. From AI safety and alignment discourse in the early 2020s, used to describe how a system's latent abilities can outpace the interfaces, prompts, and scaffolding built around it. Shows up when months of apparent "progress" on a benchmark turn out to come not from a new model but from better prompting, tool use, or agent design wrapped around a model that was already there.

Commoditize Your Complement

Drive down the price of things adjacent to your product so more value accrues to you. Coined by Joel Spolsky in 2002, rooted in the microeconomics of complements and substitutes. Shows up all over the AI stack: Amazon subsidizing satellite connectivity to pull value into AWS, frontier labs commoditizing voice and legal apps to drive usage of their models, and platforms opening up agent runtimes so the underlying model becomes the only thing worth paying for.

Default Alive

A startup is "default alive" when, without raising more money or cutting burn, it will eventually become profitable before running out of cash. Coined by Paul Graham, the idea spread because it offered a clear test of whether a company was building a real business or simply extending runway. In AI, teams use it to distinguish products that can sustain compute-heavy operations on revenue from those reliant on continuous fundraising.
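
The test itself is just arithmetic: project revenue forward at its current growth rate and see whether it crosses expenses before the bank account hits zero. A toy version, with entirely hypothetical numbers:

```python
# Toy default-alive check: does monthly revenue, compounding at a constant
# growth rate, cross monthly expenses before the bank balance hits zero?
def default_alive(cash, revenue, expenses, monthly_growth, horizon_months=120):
    for month in range(horizon_months):
        if revenue >= expenses:
            return True  # profitable before the money ran out
        cash -= expenses - revenue
        if cash <= 0:
            return False  # default dead: ran out of cash first
        revenue *= 1 + monthly_growth
    return False  # never reached profitability within the horizon

# Hypothetical numbers: $2M in the bank, $50K MRR growing 10%/month, $150K burn.
print(default_alive(cash=2_000_000, revenue=50_000, expenses=150_000, monthly_growth=0.10))
```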

Dogfooding

Using your own product internally before (or alongside) shipping it to customers, both to find bugs and to signal confidence. Coined at Microsoft in the 1980s as "eating your own dog food," possibly borrowed from an Alpo commercial. Shows up in Uber's internal Claude Code leaderboards, Apple sending Siri staffers to a coding bootcamp, and every AI lab that claims a high percentage of its own code is now model-written.

Dunning-Kruger Effect

Beginners overestimate their ability while experts underestimate theirs. From social psychology (1999), it explains why novices often assume AI is already superhuman while experts can be overly cautious after seeing edge cases up close.

Founder Mode

A cultural and cognitive shift where a founder operates with heightened urgency, sharp focus, and a bias toward action. The phrase exploded in 2024 after Brian Chesky spoke at a Y Combinator event about going into "founder mode" back in 2020, when COVID gutted Airbnb's business: he laid off nearly a quarter of the company, flattened management, rebuilt product velocity, and then took the company public. Paul Graham's subsequent essay "Founder Mode" crystallized the term and pushed it into wider circulation. Since then it has spread like wildfire in the AI era, where the competitive landscape moves absurdly fast and "being in founder mode" has become shorthand for the operating cadence expected when you're shipping into a market that can change week to week.

Hand-Rolled

Custom-built instead of using standard tools. Borrowed from hand-rolled cigarettes and adopted by early programming culture, it describes much of today's AI infra: bespoke RAG stacks, evaluation harnesses, routing layers, and memory systems built before reliable primitives exist.

Hill-Climbing

Incremental improvement that risks getting stuck in a local optimum. From optimization and computer science, it mirrors gradient descent and expresses the concern that the field might over-optimize current architectures while overlooking radically better ones.
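
The failure mode is easy to see in miniature. A toy climber on a function with two peaks stops at whichever one is nearest, not whichever is highest:

```python
# Minimal hill-climbing on a toy function with two peaks. Starting near the
# smaller peak, the climber stops there: a local optimum, not the global one.
def f(x):
    return -(x - 1) ** 2 * (x - 4) ** 2 + x  # two humps; the right one is higher

def hill_climb(x, step=0.01):
    while True:
        left, right = f(x - step), f(x + step)
        best = max(left, f(x), right)
        if best == f(x):
            return x  # no uphill neighbor: stuck
        x = x - step if left == best else x + step

print(hill_climb(0.0))  # stalls near the lower left peak (around x = 1)
print(hill_climb(5.0))  # finds the higher right peak (around x = 4)
```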

Jagged Frontier

AI models can be shockingly capable on some tasks and shockingly bad on others that look adjacent, with no smooth capability curve. Coined by Dell'Acqua, McFowland, Mollick, and co-authors at Harvard and Wharton in 2023, based on GPT-4 knowledge-work studies. Shows up when a model autonomously finds a zero-day vulnerability but fumbles a simple spatial reasoning task, or when an agent handles an entire coding flow end-to-end but hallucinates on product SKUs.

Jevons Paradox

As a resource becomes cheaper or more efficient to use, total consumption of it goes up, not down, because the lower cost unlocks use cases that weren't viable before. Named for William Stanley Jevons, who noticed in 1865 that improvements in steam-engine efficiency led to more coal being burned overall, not less. Shows up as the standard retort in AI's cost-curve debate: as inference gets cheaper through smaller models, quantization, and better routing, total compute demand expands to fill the new headroom rather than plateauing.
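
The arithmetic version of the retort, with made-up numbers:

```python
# Back-of-the-envelope Jevons arithmetic, all numbers hypothetical:
# a 10x drop in price per token unlocks use cases (agents, batch pipelines)
# that grow usage 30x, so total spend goes up, not down.
old_price = 10.00   # $ per million tokens
new_price = 1.00    # 10x cheaper
old_usage = 1e9     # tokens per month
new_usage = 30e9    # demand expands past the efficiency gain

old_spend = old_price * old_usage / 1e6   # $10,000 / month
new_spend = new_price * new_usage / 1e6   # $30,000 / month
print(old_spend, new_spend)  # efficiency up 10x, total spend up 3x
```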

Legibility

The extent to which a system can be seen, measured, and standardized from above. Coined by political scientist James C. Scott in Seeing Like a State (1998) to describe how states flatten messy local practices into uniform grids, cadastres, and surnames so populations can be taxed and governed. Shows up in AI as the push to make models, agents, and training data auditable: evals, traces, mech interp, and compliance frameworks all work by forcing opaque internal behavior into forms a regulator or deployer can actually see. The term has also gone mainstream in product and design circles as shorthand for making a system understandable to the people using it, usually stripped of Scott's original warning about what gets flattened in the process.

Occam's Razor

The simplest explanation is usually the best. From medieval philosophy (attributed to the 14th-century friar William of Ockham), it guides engineering instincts: simpler systems are easier to debug, simpler architectures often generalize better, and most product problems stem from straightforward issues rather than elaborate strategic failures.

Overton Window

The range of ideas society considers acceptable. From political science, it explains how positions in AI (open weights, autonomous agents, extreme compute budgets) move from fringe to mainstream with surprising speed.

Power Law

A small number of outcomes dominate the whole distribution. From statistics and complex-systems theory, it underlies everything in tech and AI: a few models dominate benchmarks, a handful of companies capture most market traction, and a tiny fraction of agents are dramatically more reliable than the rest.
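
A quick simulation makes the shape tangible; the parameters here are illustrative, not empirical:

```python
# Sampling a heavy-tailed (Pareto) distribution to show how the top 1%
# of outcomes can carry a huge share of the total.
import random

random.seed(0)
alpha = 1.16  # roughly the classic "80/20" shape parameter
outcomes = sorted((random.paretovariate(alpha) for _ in range(100_000)), reverse=True)

top_1_percent = sum(outcomes[:1_000])
total = sum(outcomes)
print(f"top 1% of outcomes capture {top_1_percent / total:.0%} of the total value")
```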

PRD (Product Requirements Document)

The written spec for what a product or feature should do, for whom, and under what constraints. Originating in hardware and enterprise software in the 1980s to force alignment between engineering, design, and business before work started, and later codified in Marty Cagan's product management canon. In AI, the term has drifted loose: founders and operators now use PRD in place of "memo" or "spec" for everything from one-page feature briefs to investment theses, often without the original rigor around scope, success criteria, and constraints.

Principal-Agent Problem

Misaligned incentives between the decision-maker and the doer. From economics, it doubles as a metaphor for alignment: the human is the principal and the model the agent, and the work is about creating incentives, constraints, and oversight mechanisms that keep the agent behaving as intended.

Progressive Disclosure

Revealing complexity only when the user needs it. From UX design, it drives modern AI interfaces where advanced controls (system prompts, routing logic, vector settings) stay hidden until users are ready for them.

Reality Distortion Field

A leader's charisma makes impossible goals feel inevitable. Coined in 1981 at Apple to describe Steve Jobs, it appears today around ambitious AI lab roadmaps and founders whose certainty accelerates teams beyond conventional timelines.

RLHF (Reinforcement Learning from Human Feedback)

A training method where humans provide preference data that guides reward models, which in turn steer LLM behavior. Originating in reinforcement-learning research on learning from human preferences (notably Christiano et al., 2017), it's a major driver of model alignment, safety, and stylistic refinement, but also introduces failure modes when the reward signal is ambiguous or overly normative.
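
At the heart of the reward-modeling step is a simple pairwise loss, Bradley-Terry style: train the reward model so the human-preferred response outscores the rejected one. A minimal sketch, with a stand-in scoring function so it runs:

```python
# Pairwise preference loss used in RLHF reward modeling:
# -log sigmoid(r_chosen - r_rejected) is small when the preferred
# response outscores the rejected one, large when it doesn't.
import math

def preference_loss(reward_model, prompt, chosen, rejected):
    r_chosen = reward_model(prompt, chosen)
    r_rejected = reward_model(prompt, rejected)
    return -math.log(1 / (1 + math.exp(-(r_chosen - r_rejected))))

# Toy stand-in scorer (prefers shorter answers), just to make the sketch runnable.
toy_reward = lambda prompt, response: -len(response)
print(preference_loss(toy_reward, "Summarize:", "Short and clear.", "A very long rambling answer..."))
```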

S-Curve

A pattern where technologies grow slowly, then rapidly, then plateau. From innovation theory, it's often used to frame AI capability growth: many believe we're in the steep middle of the curve, while the location or existence of the plateau remains uncertain.
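
The canonical S-curve is just the logistic function: slow start, steep middle, plateau at a ceiling. A tiny sketch with illustrative parameters:

```python
# Logistic S-curve: approaches 0 on the left, the ceiling L on the right,
# with the steepest growth around the midpoint t0.
import math

def logistic(t, L=1.0, k=1.0, t0=0.0):
    return L / (1 + math.exp(-k * (t - t0)))

for t in range(-6, 7, 2):
    print(t, round(logistic(t), 3))  # ~0 at the left, ~L at the right
```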

Sandboxing

Isolating risky behavior in a controlled environment. From security engineering and game design, it's central to agent safety: constraining tool use, limiting code execution, and testing new model behaviors with minimal blast radius.
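
The minimal version is a separate process with a hard timeout and a stripped environment. This sketch is illustrative, not a real security boundary; production sandboxes add containers, syscall filtering, and filesystem and network isolation:

```python
# Run untrusted model-generated code in a child process with a hard timeout
# and an empty environment. subprocess.run raises TimeoutExpired if the
# code runs past the limit.
import subprocess, sys

def run_untrusted(code: str, timeout_s: float = 2.0) -> str:
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: Python's isolated mode
        capture_output=True, text=True, timeout=timeout_s, env={},
    )
    return result.stdout

print(run_untrusted("print(2 + 2)"))
```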

Schelling Point

A natural place people coordinate without communicating. From game theory, it explains why entire founder cohorts converge on the YC SAFE, why certain stacks become defaults, and why AI researchers gravitate toward shared norms and eval suites even without explicit coordination.

Schlep Blindness

The tendency to underestimate or ignore tedious, unglamorous work that actually determines success. Coined by Paul Graham, it resonated because every meaningful problem hides a pile of unpleasant operational tasks. In AI, it shows up when teams obsess over models but underinvest in eval pipelines, monitoring, data cleaning, or the infrastructure that makes systems reliable.

Systems Thinking

Understanding behavior by analyzing the entire interacting system. From cybernetics and organizational theory, it's essential for AI because models behave differently when embedded inside agents, tools, incentives, or multi-model ecosystems.

Type Two Fun

Miserable while it happens but meaningful in hindsight. From mountaineering culture, it's the unofficial description of long refactors, safety evaluations, and late-night deploys.

Yak Shaving

Getting stuck doing a chain of small tasks before you can start the real task. Originating at the MIT AI Lab, it's the daily reality of AI infra work: each dependency reveals ten more.

Zero-Shot / One-Shot / Few-Shot

Prompting with no examples, one example, or a small handful. Borrowed from earlier machine-learning work on generalizing from few or no labeled examples and cemented by the GPT-3 paper, "Language Models are Few-Shot Learners" (2020), these terms became the shorthand for how models generalize, mimic style, and perform structured tasks without fine-tuning.
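
The entire distinction is how many worked examples the prompt carries. Hypothetical prompts for a sentiment-labeling task:

```python
# Zero examples, one example, a handful: same task, different prompts.
zero_shot = (
    "Label the sentiment of this review as positive or negative.\n"
    "Review: The battery died in a day.\nLabel:"
)

one_shot = (
    "Review: Absolutely love it, works perfectly.\nLabel: positive\n"
    "Review: The battery died in a day.\nLabel:"
)

few_shot = (
    "Review: Absolutely love it, works perfectly.\nLabel: positive\n"
    "Review: Broke after one use, total waste.\nLabel: negative\n"
    "Review: The battery died in a day.\nLabel:"
)
```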

Closing

New ideas will keep pouring into this ecosystem as fast as old ones settle in. Some will flare briefly; others will become permanent fixtures of how people build and reason. That's the predictable rhythm of Silicon Valley's vocabulary: concepts surface, get stripped of preamble, and slip into everyday use almost overnight. The goal of a glossary like this isn't to capture every meme but to understand why they matter. They help builders coordinate, choose, and move: a shared language for navigating an accelerating world.