Think of mental models as a set of lenses for viewing the world. They’re extremely useful for reasoning about seemingly intractable problems. They’re also easy to forget in the moment, and, more importantly, not every model is effective against every sort of problem. Knowing which mental model to apply to a particular problem is an art that you get better at with practice. So try to use them as a sort of checklist when dealing with a complex problem. Finally, here is a non-exhaustive list of the mental models I find useful (I’ll keep this list updated as I find more).
The just-get-started method
It's the notion that just starting to work on a small, concrete, finishable problem puts your consciousness in a productive state.
Corollary: Just do something concrete. Anything. Do your laundry, or clean your desk, or add a single unit test. Just do something.
The LRU prioritisation method
Since your consciousness can only focus on one task at a time, pick the most important problem, work on it, and ignore everything else.
The teaching method
It's the notion that teaching the basics is an excellent way to generate profound new ideas and to put your consciousness in a productive state.
Corollary: If you’re stuck, put yourself in a position where you have to teach someone the basics.
The planning fallacy
The observation that humans are overly optimistic when predicting the success of their undertakings. Empirically, the average case turns out to be worse than the worst-case human estimate.
Corollary: Be really pessimistic when estimating. Assume the average case will be slightly worse than the hypothetical worst case.
Note: While I agree with the general conclusion of the planning fallacy, I tend to disagree with the usual reasoning behind it. It's not that humans are bad at estimating their undertakings; it's that today's world has more nonlinearities (more on this later).
Inversion
The observation that many hard problems are best solved backward. In other words, figure out what you don't want, avoid it, and you'll get what you do want.
Corollary: Find out how people commonly fail at what you do, and avoid failing like them.
Bias for action
In daily life, many important decisions are not that expensive and are easily reversible. It’s not enough to have information; it’s crucial to move quickly and recover if you were wrong.
Idiom: Always default to action.
Idiom: The best thing you can do is the right thing, the next best thing is the wrong thing, and the worst thing you can do is nothing.
Strategy and tactics
Most decisions fall into one of these two categories. Strategic decisions have long-term, gradual, and subtle effects; they're overarching plans or sets of goals, and changing strategies is like turning a ship around: it can be done, but not quickly. Tactical decisions, by contrast, are encapsulated in outcomes with relatively quick binary resolutions (success or failure).
Example: Picking a programming language is a strategic decision whilst choosing how to learn that programming language is more like a tactical decision.
The Map is not the Territory
The map of reality is not reality. Even the best maps are imperfect simply because they are reductions of what they represent. If a map were to represent the territory with perfect fidelity, it would no longer be a reduction and thus would no longer be useful to us. A map can also just be a snapshot of a point in time, representing something that no longer exists. This is important to keep in mind as we think through problems and make better decisions.
First Principles Thinking
Basically you reason complicated problems by separating the underlying ideas or facts from any assumptions based on them. What remains are the essentials. If you know the first principles of something, you can build the rest of your knowledge around them to produce something new. It's one of the best ways to reverse-engineer complicated situations and unleash creative possibility.
Second-order thinking
It's the process of considering not only the immediate consequences of your actions but also the subsequent effects of those actions. This puts you ahead of most people and can avert disaster.
Example: If electric self-driving vehicles suddenly took over, gasoline consumption would drop by something like 50% (first-order thinking). Take it a step further and consider the second-order consequences: the tobacco industry might be hit hard as well, since half of US tobacco sales happen at gas stations, and there are meaningful indications that removing distribution reduces consumption.
Probabilistic thinking
Essentially, using probability theory to estimate the likelihood of any specific outcome. It's one of the best ways to improve the accuracy of your decisions.
Bayesian thinking
The Bayesian method is a way of thinking whereby you take into account all relevant prior probabilities and then incrementally update them as new information arrives. In a non-deterministic world, it's one of the most powerful tools for improving the accuracy of our decisions, and one of the easiest to overlook.
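To make a single update concrete, here's a minimal sketch in Python. The scenario and all the numbers (outage rate, alert accuracy) are purely illustrative, not from any real system:

```python
# A single Bayesian update with illustrative numbers: suppose 1% of
# deploys cause an outage (the prior), an alert fires on 90% of bad
# deploys (the likelihood), and false-alarms on 5% of good ones.
def bayes_update(prior, likelihood, false_positive_rate):
    """Return P(hypothesis | evidence) via Bayes' theorem."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

posterior = bayes_update(prior=0.01, likelihood=0.90, false_positive_rate=0.05)
print(round(posterior, 3))  # 0.154
```

Note how a seemingly damning alert only lifts the probability of a bad deploy from 1% to about 15%: the prior does most of the work, which is exactly what unaided intuition tends to miss.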
Law of Large Numbers
One of the fundamental underlying assumptions of probability is that as more instances of an event occur, the actual results will converge on the expected ones.
Example: We know that the probability of a fair coin landing heads is 0.5. If I toss the coin 5 times, I can't say much about the proportion of heads I'll see, but if I toss it 100 or 1,000 times, I can be quite confident it will have landed heads close to 50 or 500 times, and the relative deviation keeps shrinking as the number of tosses grows.
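A few lines of Python make the convergence visible. This is just a sketch; the seed is arbitrary and only fixes the run for reproducibility:

```python
import random

# Law of Large Numbers in miniature: the fraction of heads in n fair-coin
# tosses converges toward the true probability, 0.5, as n grows.
def head_fraction(n_tosses, seed=42):
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

for n in (10, 1_000, 100_000):
    print(n, head_fraction(n))
```

The 10-toss run can land far from 0.5, but the 100,000-toss fraction will sit within a fraction of a percent of it.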
Power laws
One of the most common processes that does not fit the normal distribution is the power law, whereby one quantity varies with a power of another rather than linearly.
Example: The Richter scale describes the power of earthquakes on a power-law distribution scale: an 8 is 10x more destructive than a 7, and a 9 is 10x more destructive than an 8. The central limit theorem does not apply and there is thus no “average” earthquake. This is true of all power-law distributions.
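As a quick sketch of what "varies with a power rather than linearly" means, the Richter relationship from the example can be written directly (the magnitudes here are just the ones from the example):

```python
# On the Richter scale, each whole-number step multiplies the measured
# shaking by 10, so the ratio between two quakes is a power of ten of
# the magnitude difference, not a multiple of it.
def amplitude_ratio(m_high, m_low):
    return 10 ** (m_high - m_low)

print(amplitude_ratio(8, 7))  # 10: one step up means 10x, not +1
print(amplitude_ratio(9, 7))  # 100: two steps up means 100x, not 2x
```

This is the signature of a power law: equal steps on one axis produce multiplicative, not additive, jumps on the other.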
Regression to the Mean
In a normally distributed system, large deviations from the average will tend to return to that average as the number of observations grows: the so-called Law of Large Numbers. We are often fooled by regression to the mean, as with a sick patient improving spontaneously around the same time they begin taking an herbal remedy, or a poorly performing sports team going on a winning streak. We must be careful not to confuse statistically likely events with causal ones.
The Pareto principle
Vilfredo Pareto observed that a small fraction of some phenomenon often causes a disproportionately large share of the effects.
Corollary: 20% of the effort can give you 80% of the result.
Marginal utility
The change in utility resulting from a change in consumption of a good. Marginal utility usually diminishes as consumption increases.
Example: The first iPhone will be significantly more impactful in your life than the second one.
Mere exposure effect
An observation that humans tend to develop a preference for things, people, and processes merely because they are familiar with them.
Corollary: Merely putting people in a room together repeatedly, and giving them a shared direction, shared symbols, and competition, will create a group with very strong bonds.
Narrative instinct
We are wired to respond to storytelling. A story arc is a way of structuring ideas to tap into this response, typically by describing a change in the world.
Example: Once upon a time there was ___.
Nonviolent Communication (aka NVC)
A communication framework that allows expressing grievances and resolving conflicts in a non-confrontational way. Structuring difficult conversations as described in NVC makes the process dramatically less painful. NVC contains four components: a. expressing facts, b. expressing feelings, c. expressing needs, and d. making a request.
Example: You didn’t turn in the feature yesterday. When that happened I felt betrayed. I need to be able to rely on you to have a productive relationship with you and my superiors. In the future, could you notify me in advance if something like that happens?
The Overton window
It's the range of ideas a particular group of people will accept. Ideas fall on a spectrum of acceptance: policy, popular, sensible, acceptable, radical, and unthinkable.
Corollary: You need to be sensitive to the Overton window when presenting a group with cultural changes.
Power of defaults
The observation that most people never change a pre-selected option, so whoever sets the defaults quietly shapes the outcome.
Economies of scale
The advantages due to size or scale of operation, where cost per unit decreases with increasing scale.
Chaos Dynamics (Butterfly Effect)/ (Sensitivity to Initial Conditions)
It's the notion that small changes in initial conditions can have massive downstream effects.
Corollary: Some aspects of physical systems or social systems are fundamentally unpredictable.
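The classic toy demonstration of this is the logistic map. The sketch below runs two trajectories that start a hair's breadth apart and shows they end up completely decorrelated:

```python
# Butterfly effect in a few lines: the logistic map x -> r*x*(1-x) with
# r = 4 is chaotic, so a 1e-10 difference in starting conditions is
# amplified at every step until the trajectories bear no resemblance.
def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-10, 50)
gap = max(abs(x - y) for x, y in zip(a[-10:], b[-10:]))
print(gap)  # the 1e-10 starting difference has blown up by many orders of magnitude
```

No measurement of the starting state, however precise, would let you predict step 50; that's the sense in which such systems are fundamentally unpredictable.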
Via Negativa – Omission/Removal/Avoidance of Harm
In many systems, improvement is at best, or at times only, a result of removing bad elements rather than of adding good elements.
Example: Dieting is more effective for losing weight than exercising.
The Lindy Effect
The notion that if a non-perishable object or idea has lasted X years, it can be expected to last another X years.
Corollary: Older books are more robust than newer ones.
Example: If a book has lasted 200 years, it can be expected to last another 200.
Hindsight bias
We tend to connect the dots backward and form a narrative that we knew it all along, forgetting that we simply didn't have the information available at the time. It's nearly impossible to turn back the clock mentally.
Opportunity cost
Doing one thing means not being able to do another. We live in a world of trade-offs, and the concept of opportunity cost rules them all.