A Field Guide to Silicon Valley's Most Invasive Memes
A short handbook for understanding how Silicon Valley thinks
@RohdeAli | April 17, 2026
Tech is full of ideas that didn't originate here at all. They drifted in from psychology, philosophy, Cold War game theory, and stray economics papers, then spread like invasive species. In this ecosystem, a concept doesn't rise slowly; you hear it once in a meeting, and by the end of the week it's in every pitch deck, Slack channel, and product review you touch. Tech doesn't just adopt ideas; it absorbs them, compresses them, strips out the caveats, and blasts them through the system. What follows is a field guide to the most contagious ones: a map of the memes the industry runs on, and the surprising places they came from.
Affordances
Interface cues that signal how something should be used. Originating in ecological psychology and later formalized in HCI, they shape how Claude Code surfaces its slash commands and @ file mentions so capabilities are discoverable without reading docs first, and how the "Tools" button in ChatGPT makes code execution and web search explicit options rather than wording-dependent tricks hidden in the prompt.
Anti-Patterns
Tempting solutions that reliably cause bad outcomes. Originating in software engineering, the idea surfaces when developers scrape a site that offers a clean API, when people try things like "just tell the model not to hallucinate," or when an agent's instructions file (a CLAUDE.md, an AGENTS.md, a .cursor/rules) grows so long the model stops reliably following any of it.
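A minimal sketch of the first case in Python (the endpoints are placeholders, not a real service): the scraper fights rendered HTML with brittle string surgery, while the documented API hands back the same data as structure.

```python
import requests  # third-party: pip install requests

BASE = "https://example.com"  # placeholder domain, not a real service

def get_products_the_hard_way():
    # The anti-pattern: scrape rendered HTML, then parse it with brittle
    # regexes or CSS selectors that silently break on every redesign.
    html = requests.get(f"{BASE}/products").text
    return html

def get_products():
    # The same data via the documented API: structured, versioned, stable.
    return requests.get(f"{BASE}/api/v1/products").json()
```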
Bikeshedding
Over-focusing on trivial issues because the important ones are harder. From Parkinson's Law of Triviality, it shows up when an AI team burns a sprint debating which system-prompt phrasing is most polite, while skipping the harder conversation about whether their eval set reflects the queries real users actually send.
Capability Overhang
The gap between what a model can already do and what people have figured out how to get it to do. From AI safety and alignment discourse in the early 2020s, used to describe how a system's latent abilities can outpace the interfaces, prompts, and scaffolding built around it. Shows up most clearly in the SWE-bench trajectory, where scores climbed from the low single digits to over fifty percent on roughly the same underlying GPT-4-class model, because the agent scaffolding around it (Claude Code, Aider, Devin) got dramatically better while the weights stayed essentially the same.
Commoditize Your Complement
Drive down the price of things adjacent to your product so more value accrues to you. Coined by Joel Spolsky in 2002, rooted in the microeconomics of complements and substitutes. Shows up all over the AI stack: Amazon subsidizing satellite connectivity to pull value into AWS, frontier labs commoditizing voice and legal apps to drive usage of their models, and platforms opening up agent runtimes so the underlying model becomes the only thing worth paying for.
Default Alive
A startup is "default alive" when, without raising more money or cutting burn, it will eventually become profitable before running out of cash. Coined by Paul Graham, the idea spread because it offered a clear test of whether a company was building a real business or simply extending runway. Shows up sharply in AI: Cursor built a significant ARR business on top of OpenAI and Anthropic inference, so the default-alive question becomes whether its unit economics survive as token prices fall and competitors arbitrage the margin; contrast that with the long tail of wrapper startups that cannot cover their inference bill even at peak pricing.
Dogfooding
Using your own product internally before (or alongside) shipping it to customers, both to find bugs and to signal confidence. Coined at Microsoft in the 1980s as "eating your own dog food," possibly borrowed from an Alpo commercial. Shows up in Uber's internal Claude Code leaderboards, Apple sending Siri staffers to a coding bootcamp, and every AI lab that claims a high percentage of its own code is now model-written.
Dunning-Kruger Effect
Beginners overestimate their ability while experts underestimate theirs. From social psychology (1999), it shows up when a first-time Claude Code user ships a landing page in an afternoon and concludes software engineering is basically over, while senior engineers who have watched the same tools fumble a race condition or hallucinate an API become over-skeptical about what these systems can actually do.
Entropy
The tendency of any system to drift toward disorder unless energy is continuously put back in. From thermodynamics and later information theory, the word has migrated into engineering shorthand for workflow decay. Shows up as the sleeper risk in every DIY stack: people who regret self-hosting their CRM or agent pipeline six months in rarely regret the schema; they regret the creep of "one more script," broken ingestion jobs, and integrations that silently rot.
First Principles Thinking
Reasoning from foundational truths instead of by analogy or convention. The phrase traces to Aristotle's Posterior Analytics in the 4th century BCE, where first principles are the irreducible starting points of any system of knowledge, and got pulled into modern tech discourse most loudly by Elon Musk explaining how SpaceX rebuilt the rocket cost stack from raw materials up. Shows up in AI when teams refuse to assume "the answer is a transformer" or "RAG is the right pattern" and instead ask what the problem actually requires before reaching for the default stack.
Founder Mode
A cultural and cognitive shift where a founder operates with heightened urgency, sharp focus, and a bias toward action. The phrase exploded in 2024 after Brian Chesky spoke at a Y Combinator event about going into "founder mode" back in 2020: when COVID gutted Airbnb's business, he laid off nearly a quarter of the company, flattened management, rebuilt product velocity, and then took the company public. Paul Graham's subsequent essay Founder Mode crystallized the term and pushed it into wider circulation. Since then, it has spread like wildfire in the AI era, where the competitive landscape moves absurdly fast and "being in founder mode" has become shorthand for the operating cadence expected when you're shipping into a market that can change week to week.
Fungible
Interchangeable with other units of the same kind. From Roman law and classical economics, where fungible goods like grain or currency can substitute for one another because any unit is effectively identical, unlike unique items such as art or land. Shows up in AI as the assumption under every pricing debate: tokens from one frontier model treated as roughly fungible with another, compute hours across clouds, and outputs compared in a benchmark. The reverse of the concept powered the NFT craze, where "non-fungible" was meant to guarantee digital uniqueness.
Hand-Rolled
Custom-built instead of using standard tools. Borrowed from hand-rolled cigarettes and adopted by early programming culture, it describes the bespoke RAG stacks every startup wrote in 2023 before vector databases stabilized, and the agent-orchestration code teams still maintain in-house while waiting on standards like MCP to harden into defaults. Often used interchangeably with "roll your own," as in: the only reason to roll your own is if you want a specific shape of tool the off-the-shelf version doesn't give you.
Hill-Climbing
Incremental improvement that risks getting stuck in a local optimum. From optimization and computer science, it mirrors gradient descent and surfaces in the scaling-laws debate: if bigger transformers keep inching benchmarks up, labs have every incentive to chase the next percentage point rather than invest in fundamentally new architectures, which critics argue is exactly what has dragged out the search for post-transformer approaches.
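A toy illustration in Python (the landscape and step size are invented): greedy hill-climbing only accepts uphill moves, so a search started near the small peak never crosses the valley to the taller one.

```python
import math
import random

def hill_climb(f, x, step=0.1, iters=5000):
    # Greedy hill-climbing: accept a random neighbor only if it scores
    # higher, so the search stops at the first peak it reaches.
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

# A landscape with a small peak near x=0 and a taller one near x=3.
f = lambda x: math.exp(-x**2) + 2 * math.exp(-((x - 3) ** 2))

print(hill_climb(f, x=0.0))  # settles near 0: a local optimum, not the global one
```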
Idempotent
Something you can do many times and get the same result as doing it once. From mathematics, coined in 1870 by Benjamin Peirce in a paper on linear algebra, where an element x is idempotent if x times x equals x. Shows up in AI tooling as a basic reliability property: a sync script where the same Obsidian file always updates the same Google Doc instead of creating a new one each run, agent actions that can be retried without corrupting state, and APIs designed so that hitting them twice does not accidentally charge the customer twice.
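A minimal sketch of that last case in Python (the request-ID scheme and all names are hypothetical): keying each charge on a request ID makes the operation safe to retry.

```python
def charge_naive(account, amount):
    # NOT idempotent: retrying after a timeout double-charges the customer.
    account["balance"] -= amount

def charge_idempotent(account, amount, request_id, seen):
    # Idempotent: each request_id is applied at most once, so retrying
    # the same request cannot charge twice.
    if request_id in seen:
        return
    seen.add(request_id)
    account["balance"] -= amount

account, seen = {"balance": 100}, set()
charge_idempotent(account, 10, "req-42", seen)
charge_idempotent(account, 10, "req-42", seen)  # retry: a no-op
assert account["balance"] == 90
```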
Jagged Frontier
AI models can be shockingly capable on some tasks and shockingly bad on others that look adjacent, with no smooth capability curve. Coined by Dell'Acqua, McFowland, Mollick, and co-authors at Harvard and Wharton in 2023 based on GPT-4 knowledge-work studies, and reframed by Andrej Karpathy in 2024 as "jagged intelligence": AI is not on the same axis as human intelligence at all, since in people most skills are correlated and rise together while in AI they are uncorrelated across tasks. Shows up when a model autonomously finds a zero-day vulnerability but fumbles a simple spatial reasoning task, or when an agent handles an entire coding flow end-to-end but hallucinates on product SKUs.
Jevons Paradox
As a resource becomes cheaper or more efficient to use, total consumption of it goes up, not down, because the lower cost unlocks use cases that weren't viable before. Named for William Stanley Jevons, who noticed in 1865 that improvements in steam-engine efficiency led to more coal being burned overall, not less. Shows up as the standard retort in AI's cost-curve debate: as inference gets cheaper through smaller models, quantization, and better routing, total compute demand expands to fill the new headroom rather than plateauing.
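The arithmetic in a toy Python sketch (every number here is made up for illustration): if a 10x price drop unlocks a 30x jump in usage, total spend rises rather than falls.

```python
# Hypothetical numbers: inference price falls 10x, but the cheaper price
# unlocks new use cases, so tokens consumed grow 30x.
price_before, tokens_before = 10.0, 1_000  # $ per million tokens, M tokens/month
price_after, tokens_after = 1.0, 30_000

spend_before = price_before * tokens_before  # $10,000/month
spend_after = price_after * tokens_after     # $30,000/month
print(spend_after / spend_before)            # 3.0: consumption outran efficiency
```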
Legibility
The extent to which a system can be seen, measured, and standardized from above. Coined by political scientist James C. Scott in Seeing Like a State (1998) to describe how states flatten messy local practices into uniform grids, cadastres, and surnames so populations can be taxed and governed. Shows up in AI as the push to make models, agents, and training data auditable: evals, traces, mech interp, and compliance frameworks all work by forcing opaque internal behavior into forms a regulator or deployer can actually see. The term has also gone mainstream in product and design circles as shorthand for making a system understandable to the people using it, usually stripped of Scott's original warning about what gets flattened in the process.
Load-Bearing
A component, assumption, or decision that the rest of the system structurally depends on: remove it and everything above it collapses. From architecture, where a load-bearing wall or beam carries the weight of the floors and roof above it, as opposed to a partition wall you can knock out during a renovation. In AI and VC, it shows up as a rhetorical precision tool: "the load-bearing assumption here is that latency drops below 100ms" or "reasoning capability is load-bearing for this agent workflow," flagging the one thing that can't be wrong without sinking the whole thesis.
Nerd Sniped
A problem so intrinsically interesting that you can't stop working on it, even when you should. Coined by Randall Munroe in XKCD #356 (2007), where a physicist gets hit by a truck while frozen mid-thought over a math problem. Shows up across AI dev culture: engineers spending three days chasing an exotic agent failure mode instead of fixing the boring data pipeline that actually breaks production, or labs sinking weeks into benchmark micro-improvements that don't translate to real workflows.
Nerf
To weaken something that previously worked well, usually quietly and without announcement. From Nerf toys and popularized in gaming culture, where a balance patch that reduces a character or weapon's effectiveness is called a nerf. In AI, devs use it when a model update makes responses more cautious or less capable, when an API tightens rate limits, or when a platform removes functionality they relied on ("love the new products but please fix the nerfed Opus 4.6").
Occam's Razor
The simplest explanation is usually the best. From medieval philosophy, it shows up every time a team blames an agent failure on emergent reasoning collapse when the real cause turns out to be a wrong file path, a truncated context window, or an API rate limit. The unglamorous explanation is almost always the right one.
Overton Window
The range of ideas society considers acceptable. From political science, it explains how fast open-weights models went from fringe ("Meta is irresponsible for releasing LLaMA") to baseline ("every serious infrastructure company needs an open-source option"), and how autonomous agents moved from science fiction to a standard shipping target inside a couple of years.
Power Law
A small number of outcomes dominate the whole distribution. From statistics and complex-systems theory, it shows up in AI market structure: OpenAI and Anthropic capture the bulk of developer mindshare while hundreds of smaller labs share the long tail, and inside any given agent stack one or two reliable tool chains handle ninety percent of the real work while the rest barely get invoked.
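A quick Python sketch (the shape parameter is arbitrary): sample outcomes from a heavy-tailed Pareto distribution and measure how much of the total the top slice captures.

```python
import random

random.seed(0)
# Draw 100,000 outcomes from a Pareto distribution; alpha near 1.16
# gives roughly the classic 80/20 shape.
outcomes = sorted((random.paretovariate(1.16) for _ in range(100_000)), reverse=True)

top_1_percent = sum(outcomes[:1_000]) / sum(outcomes)
print(f"top 1% of outcomes capture {top_1_percent:.0%} of the total")
```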
Principal-Agent Problem
Misaligned incentives between the decision-maker and the doer. From economics, it doubles as the core metaphor for AI alignment: the human is the principal, the model is the agent, and the work is about creating incentives, constraints, and oversight that keep the agent from pursuing the letter of an instruction while violating its spirit. Anthropic's Constitutional AI is one attempt at a durable solution.
Progressive Disclosure
Revealing complexity only when the user needs it. From UX design, it shows up in how ChatGPT hides model selection and system prompts behind a settings cog while exposing a single chat box by default, and in how Claude Code ships with opinionated defaults but lets power users override permissions, hooks, and MCP servers once they know what those are.
Reality Distortion Field
A leader's charisma makes impossible goals feel inevitable. Coined in 1981 at Apple to describe Steve Jobs, it shows up today around AI lab roadmaps where the next scheduled capability leap outpaces what the team has publicly demonstrated, and in founder pitches where the promised agent "just around the corner" reshapes hiring, funding, and policy conversations even before a working demo exists.
S-Curve
A pattern where technologies grow slowly, then rapidly, then plateau. From innovation theory, it frames the live debate about where we are on transformer scaling: some labs insist we are still in the steep middle and capability gains will keep compounding, while others point to diminishing returns from the latest frontier models as evidence the plateau is close.
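The shape itself is just the logistic function; a tiny Python sketch (parameters illustrative, not fitted to any real benchmark):

```python
import math

def s_curve(t, ceiling=1.0, midpoint=0.0, rate=1.0):
    # Logistic growth: slow start, steep middle, plateau at the ceiling.
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

for t in range(-6, 7, 2):
    print(t, round(s_curve(t), 3))  # ~0 early, steepest at the midpoint, ~1 late
```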
Schelling Point
A natural place people coordinate without communicating. From game theory, it explains why entire founder cohorts converge on the YC SAFE, why certain stacks become defaults, and why AI researchers gravitate toward shared norms and eval suites even without explicit coordination.
Schlep Blindness
The tendency to underestimate or ignore tedious, unglamorous work that actually determines success. Coined by Paul Graham, it resonated because every meaningful problem hides a pile of unpleasant operational tasks. In AI, it shows up when teams obsess over models but underinvest in eval pipelines, monitoring, data cleaning, or the infrastructure that makes systems reliable.
Smoke Test
A quick, minimal check that a system isn't obviously broken before deeper testing. Borrowed from plumbing (pump smoke through pipes to find leaks) and electrical engineering (power on new hardware and watch for smoke), it shows up across engineering and AI. For instance, the first-hour ritual after a frontier model ships: researchers and practitioners fire a handful of known-tricky prompts (reversal curse, counting letters in "strawberry", a few private evals), often reaching a collective vibes-check on X before any formal benchmark lands.
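A sketch of that ritual in Python (the probes and the complete() callable are stand-ins, not any particular vendor's API): the only pass/fail criterion is "not obviously broken."

```python
from typing import Callable

# Known-tricky probes; swap in your own private evals here.
PROMPTS = [
    "How many times does the letter 'r' appear in 'strawberry'?",
    "Spell 'lollipop' backwards.",
]

def smoke_test(complete: Callable[[str], str]) -> None:
    # Check only that nothing is obviously broken: every probe gets a
    # non-empty, non-error reply. Deeper evals come later.
    for prompt in PROMPTS:
        reply = complete(prompt)
        assert reply and "error" not in reply.lower(), f"failed: {prompt!r}"
        print(prompt, "->", reply[:60])

# Usage with a trivial stub standing in for a real model call:
smoke_test(lambda p: f"stub reply to: {p}")
```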
Substrate
The underlying layer something else is built on top of. From biology and chemistry, where a substrate is the surface or material that hosts a reaction or sustains an organism, it has drifted into tech as shorthand for the platforms (foundation models, clouds, operating systems) that applications run on. Shows up sharply in AI when the substrate ships the application itself, for example Anthropic launching Claude Design, competing directly with tools like Figma built on top of its own models.
Systems Thinking
Understanding behavior by analyzing the entire interacting system. From cybernetics and organizational theory, it shows up when a model that ran fine in isolation starts producing subtly worse outputs after being wrapped in a tool-using agent, because the new failure modes come not from the model itself but from how it interacts with retries, tool outputs, and partial context. Debugging the model alone gets you nowhere.
Type Two Fun
Miserable while it happens but meaningful in hindsight. From mountaineering culture, it's the unofficial description of the weeklong refactor to rewrite a working prompt because someone insisted on structured outputs, the all-night eval run that reveals your model regressed on an edge case, and the two-day debugging of an agent that worked perfectly in dev and broke the moment real users touched it.
Yak Shaving
Getting stuck doing a chain of small tasks before you can start the real task. Originating at the MIT AI Lab, it's the lived reality of AI infra work: you set out to add a new eval, which requires updating the harness, which fails because a dependency bumped its API, which now needs a Python version upgrade, which breaks three other tools in your stack, all before you have touched the thing you actually meant to do.
Zero-Shot / One-Shot / Few-Shot
Prompting with no examples, one example, or a small handful. The terms come from machine-learning research on learning from limited labeled data and were popularized by the GPT-3 paper, "Language Models are Few-Shot Learners" (2020). They show up in every serious model evaluation: zero-shot results are treated as the purest measure of a model's raw capability, while few-shot prompting is what production systems actually use because worked examples in the prompt reliably beat fine-tuning for formatting and tone.
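A minimal sketch of the three styles (the task and examples are invented for illustration):

```python
task = "Classify the sentiment of the review as positive or negative."
review = "The battery died after two days."

# Zero-shot: the instruction alone.
zero_shot = f"{task}\n\nReview: {review}\nSentiment:"

# One-shot: a single worked example before the real input.
one_shot = (
    f"{task}\n\n"
    "Review: Absolutely love this keyboard.\nSentiment: positive\n\n"
    f"Review: {review}\nSentiment:"
)

# Few-shot: a small handful of worked examples to pin down format and tone.
few_shot = (
    f"{task}\n\n"
    "Review: Absolutely love this keyboard.\nSentiment: positive\n\n"
    "Review: Broke within a week.\nSentiment: negative\n\n"
    "Review: Exactly what I needed for travel.\nSentiment: positive\n\n"
    f"Review: {review}\nSentiment:"
)
```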
Closing
New ideas will keep pouring into this ecosystem as fast as old ones settle in. Some will flare briefly; others will become permanent fixtures of how people build and reason. That's the predictable rhythm of Silicon Valley's vocabulary: concepts surface, get stripped of their caveats, and slip into everyday use almost overnight. The goal of a glossary like this isn't to capture every meme but to understand why they matter. They help builders coordinate, choose, and move: a shared language for navigating an accelerating world.