Science Me—Ask Better Questions
Science is more than test tubes and lab coats and Einstein's bad hair day. It's actually a way of thinking used to systematically decode the natural world.
Here are the four canons of science brought to you by philosophers—the original scientists—who began solving the questions of reality long before we rolled up our sleeves and got handsy with experiments.
John Locke was a philosopher who championed empiricism: the idea that knowledge of the world comes from hard data gathered through the senses. For most of history, scientific observations were made using our biological senses or, at best, basic instruments, which set limits on what we could actually know about the universe. Modern technology changed all that, massively extending our data-gathering capacity with the likes of space telescopes and gene sequencers. The result: a data explosion.
We can now probe reality at unimaginable scales, allowing us to see viruses, DNA, and even individual atoms. Locke would be overjoyed by the sheer breadth and depth of empirical data we generate today. Moreover, technology delivers objective data, which is altogether more precise, scalable, and reliable than the subjective data we generate with our brains.
Empiricism is so critical to the scientific method that we must be sceptical of all claims we can't measure—like ghosts, mediumship, and telepathy. If they're not reliably detectable, they fall into the domain of belief.
Scientists love a good experiment. But before we go controlling variables and gathering data, we need to specify exactly what it is we're trying to find out. Thus, the need for a hypothesis: a testable idea that sets out a specific claim. Only then can it be supported or refuted through experiments.
Around 240 BCE, plenty of people still believed the Earth was flat. The mathematician Eratosthenes treated this as a falsifiable hypothesis and designed an experiment to measure the shape of the Earth. His brilliant idea was to use shadows, geography, and maths to show the Earth is in fact a sphere, while also estimating its circumference and axial tilt with remarkable accuracy.
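Eratosthenes' reasoning can be replayed in a few lines of arithmetic. The figures below are rounded modern approximations, not his original measurements in stadia: because the sun's rays arrive effectively parallel, the shadow angle he measured at Alexandria equals the arc of the Earth's surface between Alexandria and Syene.

```python
# Toy reconstruction of Eratosthenes' method. The numbers are rounded
# modern approximations, not his original figures.
shadow_angle_deg = 7.2   # sun's angle from vertical at Alexandria at noon
distance_km = 800        # rough Alexandria-to-Syene distance, where the
                         # midsummer sun cast no shadow at all

# Parallel sun rays mean the shadow angle equals the arc between the two
# cities, so that arc is 7.2/360 = 1/50 of the full circle.
fraction_of_circle = shadow_angle_deg / 360
circumference_km = distance_km / fraction_of_circle
print(f"estimated circumference: {circumference_km:.0f} km")  # → 40000 km
```

The real figure is about 40,075 km around the equator, so even with crude inputs the method lands remarkably close.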
It all began with a specific, falsifiable hypothesis; an idea about the world that we can, at least in principle, examine to see if we're right or not. If an idea isn't falsifiable, it's a scientific cul-de-sac.
Human beings are creative storytellers. We're capable of leaping from one fantasy to another to explain, entertain, and elucidate. But since this creative drive clashes with the need for parsimony in science, we must go out of our way to analyse and challenge our theoretical claims at every turn.
Parsimony is best known through Occam's Razor, which states: "entities should not be multiplied beyond necessity". This means stripping wild hypotheses out of the equation. It's not mythological demons making your wife speak in tongues; it's clinically observable schizophrenia.
Conspiracy theories get a bad rap because they're often extravagant in their assumptions. That's not to say conspiracies don't exist, but we should test the more probable explanations first, reaching for exotic ones only once the mundane have been ruled out. This is the first step to achieving parsimony.
Parsimony is often misunderstood to mean the simplest explanation will do. Relativity, quantum mechanics, and evolution aren't simple theories—but they are the most parsimonious given what we observe about reality. The point of parsimony is to explain complex observations in the most succinct and reasonable way possible, using only the evidence at hand.
The natural world as we observe it is a complex web of cause and effect, from the gravitational forces that shape galaxies to the molecular interactions between neurotransmitters in your brain. Scientific reductionism strives to analyse phenomena in terms of deterministic cause and effect; a principle which renders fate, karma, and angelic intervention moot.
Quantum physicists are wrestling with a big problem. The subatomic world does not appear to be deterministic. Instead, it appears probabilistic and random; an observation that poos all over classical physics and then rubs its face in it. How can the universe be both random and deterministic at the same time?
This is the problem that haunted Einstein. Even today, we're missing a critical clue that will lead us to a parsimonious theory. Until then, we just don't have enough pieces of the puzzle. Has anyone tried looking under the couch?
Fortunately, at the classical scale, determinism continues to drive our everyday science. The world still ticks in a predictable fashion, despite the probabilistic quantum phenomena that underlie it. We can still accurately predict solar eclipses, ocean tides, social dynamics, and much in-between. Phew.
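A toy simulation hints at how that can happen: individual events can be utterly random, yet their aggregate behaviour is highly predictable. The coin-flip model below is an illustrative stand-in, not real quantum mechanics.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

# Each "measurement" is a 50/50 random event, but the average over many
# trials converges on a predictable value — a cartoon of how stable,
# classical-looking behaviour can emerge from probabilistic micro-events.
trials = 100_000
heads = sum(random.random() < 0.5 for _ in range(trials))
print(f"proportion of heads: {heads / trials:.3f}")  # very close to 0.500
```

Any single flip is unpredictable, yet the proportion across 100,000 flips barely wobbles; at everyday scales, the randomness washes out.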
The Never-Ending Mystery of Science
These four principles form the foundation of science, on which many more specialised and nuanced principles are built. When a scientist proposes a new idea, her logic and experimental data are subjected to peer review. This adds another round of scrutiny, allowing fresh eyes to critique and evaluate the idea, applying the relentless scepticism that's essential to the march of scientific progress.
Peer review brings a collective intelligence, designed to protect against both innocent errors in thinking and scientific fraud that can send research down the wrong path.
Revelatory research papers can drive funding into new fields, spawning more investigations, bigger data, better predictions, and better solutions. When ideas gather momentum, they allow science to home in on the truth with increasing specificity. Ultimately, historic theories may be updated and even overhauled in the light of new evidence. It's a continual error-checking process.
And yet our scientific knowledge will never be complete. To understand absolutely everything would leave no room for curiosity. This is what drives science. The never-ending mystery of the universe. We can delight in the incremental rewards, the glimpses of deeper truth, each science-driven discovery bringing new insights to the beautiful and astonishing system we call nature.