Most people learn by absorbing stories — narratives about why something happened, who succeeded, and what caused what. The shape of the story becomes the shape of their understanding. This is natural, even useful. But the fundamental failure isn't in using stories; it's in the misallocation of cause and effect. When you take a story and its outcome at face value, you inherit someone else's causal model without ever testing whether the mechanics they attributed actually drove the result.
Early aircraft designers assumed flight required flapping wings because that's how birds did it. They built elaborate contraptions with feathered, flapping mechanisms that were heavy, unstable, and dangerous. The analogy to bird flight ignored the differences in anatomy, muscle power, and scale between humans and birds. Only when engineers reframed flight as a problem in lift, thrust, drag, and weight — the underlying mechanics that actually matter — rather than imitation of birds, did powered flight become possible. The bird story still applied at the surface, of course, but the causal attribution was wrong.
Similarly in economics, politicians often argue that "a country's budget is like a household's budget"; therefore governments should cut spending in recessions. While some elements of this are true, it overlooks that a national economy can issue currency, borrow in ways households can't, and has a central bank to modulate demand. The analogy may fit a moral sentiment, but it misidentifies which parameters actually drive macroeconomic outcomes. Applied crudely, it can cause real economic damage: slower recoveries, higher unemployment, and a deteriorating debt-to-GDP ratio.
The deeper problem is what happens when this compounds. A story-first individual accepting one narrative seems minor in isolation, but in practice it amounts to crowd-sourcing their worldview. They gravitate toward stories that confirm the ones they previously liked, or the ones others around them declared credible. Each new story becomes another data point reinforcing the existing causal model, with little effort to verify its parameters; the post-hoc narrative is taken at face value. The goal should be the opposite: to identify the underlying model parameters that are genuinely causal — or at minimum, strongly correlated with the outcome — not to curate narratives that feel coherent.
This is where the most important scientific principle enters: falsification. It is the discipline of dissecting any story or outcome into a set of discrete hypotheses that can be individually evaluated and, critically, disproven. You cannot prove a hypothesis true through confirmation alone — but a single counterexample can eliminate it. If it rains on a day when I did not sing the previous evening, then singing clearly has no causal relationship with rain. Of course, the world is messier than that: perhaps the clouds that bring rain also darken my mood enough to make me want to sing, creating the illusion of a connection where none exists. Untangling this — separating genuine drivers from confounders — is the actual work. The practice of careful observation, systematic elimination, and deliberate curation of model parameters is how one formalizes what it takes to minimize long-term surprise in life.
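The singing-and-rain confounder can be simulated directly. The probabilities below are invented for illustration, but the structure shows the distinction that matters: observation picks up the spurious correlation, while an intervention (forcing the supposed cause) reveals that it has no effect.

```python
import random

random.seed(0)

def day(force_sing=None):
    # Invented generative model: clouds drive both rain and the urge to sing.
    cloudy = random.random() < 0.4
    rain = random.random() < (0.8 if cloudy else 0.1)
    sing = random.random() < (0.7 if cloudy else 0.2)
    if force_sing is not None:  # intervention: override the mood entirely
        sing = force_sing
    return sing, rain

# Observational data: singing and rain share a common cause (clouds).
obs = [day() for _ in range(100_000)]
sang = [r for s, r in obs if s]
quiet = [r for s, r in obs if not s]
rain_given_sing = sum(sang) / len(sang)
rain_given_no_sing = sum(quiet) / len(quiet)

# Interventional data: forcing the "cause" leaves the rain rate unchanged.
do_sing = sum(r for _, r in (day(force_sing=True) for _ in range(100_000))) / 100_000
do_quiet = sum(r for _, r in (day(force_sing=False) for _ in range(100_000))) / 100_000

print(f"P(rain | sang)      ~ {rain_given_sing:.2f}")    # higher: spurious link
print(f"P(rain | quiet)     ~ {rain_given_no_sing:.2f}")  # lower
print(f"P(rain | do(sing))  ~ {do_sing:.2f}")             # ~ base rate: no effect
print(f"P(rain | do(quiet)) ~ {do_quiet:.2f}")            # ~ same base rate
```

The gap between the first two numbers is exactly the illusion the essay describes; the near-equality of the last two is what falsifies the causal story.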
Physics, math, and engineering often train this reflex because they strictly punish the alternative. You can't just memorize the period of a pendulum; the exam will change the mass, add air resistance, tilt the pivot, and expect you to rebuild the reasoning from scratch. If your calculation is wrong, the bridge falls. If your derivation is wrong, the simulation spits out nonsense. The field itself forces you to locate where the reasoning breaks, isolate the failing assumption, and fix it. Over time, this conditions you to treat every analogy as a hypothesis rather than an immediate proof.
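The pendulum example makes the point concrete. In the small-angle approximation the period is T = 2π√(L/g), and what matters is as much what is absent from the formula as what is present:

```python
import math

def pendulum_period(length_m, g=9.81):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g).

    Note what is absent: the bob's mass. Memorizing one number tells you
    nothing; knowing which parameters enter the formula lets you rebuild
    the answer when the exam changes the setup.
    """
    return 2 * math.pi * math.sqrt(length_m / g)

# Doubling the mass changes nothing (mass is not a parameter at all);
# quadrupling the length only doubles the period (T scales with sqrt(L)).
t1 = pendulum_period(1.0)
t4 = pendulum_period(4.0)
print(f"T(L=1m) = {t1:.2f} s")   # ≈ 2.01 s
print(f"T(L=4m) = {t4:.2f} s")   # ≈ 4.01 s
```

A student who has only memorized "about two seconds" fails the moment the length changes; a student who knows the parameters rebuilds the answer from scratch.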
On the other hand, political science often starts with concepts like realism, liberalism, and constructivism, which define preset lenses through which to interpret events. Similarly, psychology often starts from concepts like attachment theory, the Big Five, or the Milgram experiments. These are of course valuable ideas, but they can constrain thinking to preset viewpoints, taking the story at the highest level rather than reasoning bottom up. Many learn not to interrogate the parameters of a theory but simply to accept it as a standalone fact. Training oneself to match a new event to an old case and assume the dynamics will be the same is dangerous here.
With the advent of AI and its ability to operationalize work, we are reminded of something often romanticized away: much of science is not a function of inspiration. The breakthroughs we celebrate are often the result of systematically searching the most probable areas for answers — running thousands of experiments, eliminating dead ends, narrowing the search space — until something new revealed itself. Science is, at its core, organized persistence. AI accelerates this by orders of magnitude, making the systematic search faster, broader, and cheaper. But the approach still matters: knowing how to define hypotheses, which parameters to vary, and when to stop believing a story.
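The "organized persistence" described above can be sketched as an elimination loop: hold a pool of candidate hypotheses, and let each observation falsify whatever it contradicts. The candidate rules and data here are toy inventions, but the mechanic is the one the essay names — a single counterexample removes a hypothesis for good.

```python
# Falsification as search: keep only hypotheses that survive every observation.
# Each hypothesis is a candidate rule mapping an input to a prediction
# (all names and rules here are invented for illustration).
hypotheses = {
    "square": lambda x: x * x,
    "double": lambda x: 2 * x,
    "plus_2": lambda x: x + 2,
    "cube":   lambda x: x ** 3,
}

# Observations from the real process (here, secretly x * x).
observations = [(1, 1), (2, 4), (3, 9)]

for x, y in observations:
    # A single counterexample eliminates a hypothesis permanently.
    hypotheses = {name: h for name, h in hypotheses.items() if h(x) == y}

print(sorted(hypotheses))  # → ['square']
```

Note that "double" also predicts (2, 4), yet it was already eliminated by the first observation: the search converges not because any hypothesis was proven, but because every rival was disproven.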