Classical Physics
Consider a physicist asked to predict how far a ball will travel if thrown into the air. The physicist will ask for the initial vector (the speed and angle of release), plug that into an equation, and simply solve for the answer. If they want to go all the way back to first principles, they take the Newtonian equation for motion in the vertical (y) direction:
y(t) = v_y t + (1/2) a t^2
where v_y is the vertical component of the launch velocity and a is the acceleration due to Earth's gravity (-9.8 m/s^2), and solve for when y(t) = 0 (when the ball arrives back at a height of zero, aka on the ground); the nonzero solution is t = -2 v_y / a. Then plug that t into the equation for the horizontal direction, where v_x is the horizontal component:
x(t) = v_x t
And get a distance.
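For the concrete-minded, here is that calculation as a minimal Python sketch; the function name and the sample launch values (20 m/s at 45 degrees) are mine, purely for illustration:

```python
import math

def projectile_range(v, theta_deg, g=9.8):
    """Horizontal distance traveled, ignoring air resistance and assuming
    flat, level ground (the idealized textbook setup described above)."""
    theta = math.radians(theta_deg)
    v_y = v * math.sin(theta)    # vertical component of the launch velocity
    v_x = v * math.cos(theta)    # horizontal component
    t = 2 * v_y / g              # nonzero solution of y(t) = v_y*t - (1/2)*g*t^2 = 0
    return v_x * t               # plug t into x(t) = v_x*t

print(projectile_range(20, 45))  # roughly 40.8 meters
```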
Things are more complex in real life; we’ll return to that.
Note, though, that this problem is closely related to the science necessary to travel to the moon or calculate artillery tables, both of which were early uses of computers.
Medicine
Consider a doctor asked to treat the pain a patient has in their side. The doctor needs to examine the patient, ruling out such obvious factors as a knife sticking out from between the patient's ribs. They will poke and prod and ask questions about the pain, trying to find where it is coming from, how long it has been there, and what the patient has been eating. They may or may not order more sophisticated tests.
Ultimately, the doctor will use experience and a large number of heuristics to guess the cause (unless there really is a knife) and then apply further heuristics and learned guesses to treat the problem: aspirin, rest, change of diet, removal of a kidney stone, or the beginning of treatment for something more serious like cancer.
The new paradigm of science
The first model was widely considered the ideal way of doing science—at least until the end of the 19th century. You make a few observations, plug the data into an equation, and out pops the answer: the shell lands where predicted, Halley’s Comet returns, the plane crosses the Atlantic, Armstrong walks on the moon. It was made doubly attractive by its superficial resemblance to a mathematical proof, with its promise of absolute Truth—something analytic rather than contingent—at least after the initial determination of the empirical evidence.
By comparison, Medicine (and Chemistry, Biology, let alone Psychology, Economics, and the other social sciences) was seen as messy, intuitive, too dependent on rules of thumb that smelled of tradition (at best) or superstition (and often were exactly that). Even today, these disciplines are sometimes said to suffer from Physics-envy, the desire to be more precise, more accurate, less contingent, more mathematical.
But towards the end of the 19th century, this picture of Physics and Mathematics as purer forms of knowledge started to break down. Newtonian mechanics and optics were superseded by Relativity and Quantum Mechanics. Space was no longer absolute, gases behaved statistically, matter was both wave and particle, and most unsettlingly, there were limits on how precisely anything physical could be known—limits imposed by Heisenberg’s Principle, which contained the word “Uncertainty” right there in its name.
Even mathematics suddenly seemed less sure. First, Euclid’s self-evident axioms, the very model of absolute Platonic truth, turned out not to be universal, and then Gödel showed that even logic had logical limits.
In this light, it became obvious that classical mechanics was never as deterministic as it appeared. The ball (or shell or rocket) was highly unlikely to land exactly where the calculation said it would. The angle and speed of launch might be slightly off, the powder in the gun slightly more or less than called for, the weather might move the object around in flight, and the ground was never completely flat and level. The results of artillery tables should have been expressed in probability distributions instead of exact distances.
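To see what that would look like, here is a small Monte Carlo sketch in Python; the noise levels (a 1% spread in launch speed, half a degree of aim error) are invented for illustration, not drawn from any real artillery table:

```python
import math
import random

def projectile_range(v, theta_deg, g=9.8):
    # Same idealized range formula as before: v^2 * sin(2*theta) / g
    theta = math.radians(theta_deg)
    return v * v * math.sin(2 * theta) / g

# Perturb the launch conditions: hypothetical 1% speed error, 0.5-degree aim error.
samples = [projectile_range(random.gauss(20, 0.2), random.gauss(45, 0.5))
           for _ in range(10_000)]

mean = sum(samples) / len(samples)
spread = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
print(f"landing distance: {mean:.1f} m +/- {spread:.1f} m")  # a distribution, not a point
```

The answer comes back as a spread of landing points rather than a single number, which is exactly what an honest artillery table would report.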
But still, Physics worked. People did land on the moon. Quantum mechanics, expressly uncertain, was also shown to be one of the most precise of all physical theories. Nobody said the Parallel Postulate was false, simply that it depended on context in the real world.
With Physics becoming more uncertain, the other sciences started to appear more reliable precisely because they had always allowed for probability. Rules of thumb, based on empirical observation, don’t seem so bad when you start to realize they work most of the time and that there really is no alternative.
So, what does this all have to do with ethics?
Strangely, some philosophers are still looking for absolute moral truths—undeniable universal axioms of ethics. Meanwhile, the lack of such agreed-on axioms leads onlookers and some philosophers to conclude that there are no ethical rules and anything goes, or, conversely, that we need to look to some revealed truth in order to avoid nihilism.
It seems, rather, that ethics should be more like medicine: rules of thumb, backed by empirical evidence, taking into account all the context of a complicated situation, recognizing that sometimes we’re going to diagnose the situation wrong and/or prescribe the wrong course of action. And yet, this doesn’t mean that applying leeches and imbibing mercury are just as legitimate cures as antibiotics and vaccines.
As I’ve written elsewhere, Trolley Problems aren’t actually very useful analogies for real-world ethical problems. They assume a level of certainty that, just like in physics, never really exists.