Ask Better Questions
Science is more than test tubes and lab coats and Einstein's bad hair day. It's actually a way of thinking used to systematically decode the natural world.
This mindset applies across all sciences, whether you're a logician writing machine-learning algorithms, a naturalist seeking evolutionary patterns, or an empath researching human behaviour.
So what are the rules of science? Here are four big ones, handed down by philosophers: the original scientists, who tried to decode reality long before the explosion of modern science.
Empiricism means gaining knowledge through measurable and repeatable observation. It ensures we only draw conclusions from hard evidence.
Not just the bald guy on Lost: John Locke was also a philosopher, one who argued that scientific truth comes from hard data. For most of history, scientific observations were made using the senses plus very basic tools, which set limits on what we could actually know about the universe.
Modern technology massively extends our experience of the world. We now have astonishing data-gathering capacity through the likes of microscopes and MRI machines.
We can now probe reality at once-unimaginable scales, allowing us to see viruses, DNA, and even individual atoms.
Locke would be overjoyed by the sheer breadth and depth of empirical data we generate today. Moreover, technology delivers objective data, which is altogether more precise, scalable, and reliable than the subjective impressions we gather through our senses.
Empiricism is so critical to the scientific method that we must be sceptical of claims we can't measure—like ghosts, mediumship, and telepathy. If they're not reliably detectable, they fall into the domain of belief.
Falsifiability means being able to test an idea to see how it holds up. If we can't challenge a claim, then it's too fuzzy for science to weigh in.
Scientists love a good experiment. But before we go controlling variables and gathering data, we need to specify exactly what it is we're trying to find out. Thus, the need for a hypothesis.
A hypothesis is a testable idea that sets out a specific claim. Only then can it be supported or refuted through experimentation or observation.
For instance, around 240 BCE, Greek scholars had come to accept that the Earth was a sphere, but nobody had measured it. Since "the Earth is a sphere of a particular size" is a falsifiable hypothesis, Eratosthenes designed a kick-ass experiment to test it.
Eratosthenes used shadows, geography, and maths to estimate the Earth's circumference (and, in separate work, its axial tilt) with remarkable accuracy.
It all began with a specific, falsifiable hypothesis; an idea about the world that we can, at least in principle, examine to see if we're right or not.
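Eratosthenes' reasoning can be sketched in a few lines. The figures below are the commonly cited approximations (the exact ancient values, and the length of a stadion, are uncertain), but the logic is his:

```python
# Eratosthenes' estimate of the Earth's circumference, using the
# commonly cited (approximate) historical figures.
# At noon on the summer solstice, the Sun was directly overhead at Syene,
# while in Alexandria a vertical stick cast a shadow at about 7.2 degrees.
shadow_angle_deg = 7.2    # angle of the shadow in Alexandria
distance_stadia = 5000    # estimated Alexandria-to-Syene distance

# If the Earth is a sphere, that 7.2-degree angle is the fraction of a
# full circle separating the two cities, so the circumference follows.
fraction_of_circle = shadow_angle_deg / 360           # 1/50th of a circle
circumference = distance_stadia / fraction_of_circle  # 250,000 stadia

print(circumference)  # 250000.0 stadia
```

Note the falsifiability built in: had the Earth been flat, parallel sunlight would have cast identical shadows in both cities, and the hypothesis would have failed the test.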
Scientific theories should offer logical and succinct explanations of natural phenomena. In other words, we mustn't run away with our wild imaginations.
Human beings are creative storytellers. We're capable of leaping from one fantasy to another to explain, entertain, and elucidate. But since this creative drive clashes with the need for parsimony in science, we must go out of our way to analyse and challenge our claims at every turn.
Parsimony is also known as Occam's Razor: "entities should not be multiplied beyond necessity". In simple terms, this means stripping wild hypotheses out of the equation. It's not mythological demons making your wife speak in tongues; it's clinically observable schizophrenia.
Conspiracy theories have a bad rap because they're often quite extravagant in their assumptions. That's not to say conspiracies don't exist, but we should rule out the more probable explanations before reaching for extravagant ones. That's the first step towards parsimony.
Parsimony is often misunderstood to mean the simplest explanation will do. Not so. Relativity, quantum mechanics, and evolution aren't simple theories—but they are the most parsimonious given what we observe about reality.
The focus of parsimony is to explain complex observations in the most succinct and reasonable way possible, using only the evidence at hand.
Determinism means all events in the universe are bound by cause and effect. Things don't happen spontaneously without some link to a preceding event.
The natural world as we observe it is a complex web of cause and effect, from the gravitational forces that shape galaxies to the molecular interactions between neurotransmitters in your brain. Ultimately, the principle of determinism renders fate, karma, and angelic intervention moot.
Scientific reductionism strives to analyse phenomena in terms of fundamental cause and effect. Once the equation is balanced, there's no need to load it with additional elements.
Quantum physicists are currently wrestling with a big problem. The subatomic world does not appear deterministic; it appears probabilistic and random, an observation that poops all over classical physics and then rubs its face in it. How can the universe be both random and deterministic at the same time?
This is the problem that haunted Einstein. The conflicting evidence means we're missing some critical clue that will lead us to a parsimonious theory. Until then, we just don't have enough pieces of the puzzle. Has anyone tried looking under the couch?
Outside of quantum physics, determinism is a solid principle that drives everyday science. It has been observed, tested, and confirmed across many scientific disciplines. And it allows us to make accurate predictions about the world, from solar eclipses, to wave mechanics, to social dynamics.
The Never-Ending Mystery of Science
These fundamental principles are just the beginning. The process of peer review adds another round of scrutiny, where experts in a given specialisation try to prove your work wrong. This relentless scepticism is essential, lest generations of future scientists be sent down the wrong path.
Peer review brings collective intelligence to problems. It's designed to protect against innocent errors in thinking as well as outright scientific fraud.
Then the cycle continues. Revelatory papers can drive funding into new areas of research, spawning more investigations, bigger data, better predictions, and better solutions. When ideas gather momentum, they allow science to home in on the truth with increasing specificity. Ultimately, historic theories may be updated and even overhauled in the light of new evidence. It's a continual error-checking process.
And yet our scientific knowledge will never be complete. Not in the lifetime of our species nor, hopefully, ever. To understand absolutely everything about the universe would mark the end of curiosity. This is what drives science—the never-ending mystery of life. We can delight in the incremental rewards, the glimpses of deeper truth, each discovery bringing new insights into the beautiful and astonishing system we call nature.