The Importance of Pedigree

Consider the following thought experiment:

Suppose there is an experience machine that would give you any experience you desired. Superduper neuropsychologists could stimulate your brain so that you would think and feel you were writing a great novel, or making a friend or reading an interesting book. All the time you would be floating in a tank, with electrodes attached to your brain. Should you plug into this machine for life, pre-programming your life’s experience?

This, obviously, is a rhetorical question, and I would assume that for most people the answer is a curt “No”. We think that actually playing a game is somehow superior to a mere (simulated) experience of playing a game, even if the brain attached to the electrodes can’t tell the difference. Doing something is better than merely experiencing the doing of it. We don’t just want to experience things; we want to do things and be someone.

Here’s another scenario: Person A mistakenly believes that his favorite team won the game last night, and person B correctly believes that his favorite team won. [Assume that there are no consequences of the mistaken belief.] Even if there is no difference in their subjective degrees of happiness, if you had to choose between the two, you would choose to be person B. In terms of a person’s well-being, illusion-based happiness is perceived as inferior to fact-based happiness. You don’t want to rejoice for your home team just for the sake of being happy; you want your home team to have actually won as well. Otherwise, it’s just not the same. (Again, even if there’s no subjective difference in what we feel from the inside.)

“Happiness, at least if it is understood to correspond to well-being, turns out not to be all in the head”, writes Leo Katz in his intriguing book Ill-gotten Gains: Evasion, Blackmail, Fraud, and Kindred Puzzles of the Law, from which these scenarios are taken. He quotes an Oxford philosopher to elaborate on this:

I prefer, in important matters of my life, bitter truth to comfortable illusion. Even if I were surrounded by consummate actors able to give me sweet simulacra of love and affection, I should prefer the relatively bitter diet of their authentic reactions. And I should prefer it not because it would be morally better, or more noble, but because it would make for a better life for me to live.

… because it would make for a better life for me to live. Touché!

The pedigree of experience matters – that’s the lesson from these thought experiments. But alas, when it comes to the belief in God, the majority of us seem to prefer the comfortable illusion — there’s an after-life, God is watching and helping us etc. — to the bitter truth that there’s no God, and we are on our own.

The Argumentative Theory

Here’s one of the most thought-provoking quotes I read recently:

Reasoning was not designed to pursue the truth. Reasoning was designed by evolution to help us win arguments.

We think we use reason to improve our decisions and our knowledge. But more often than not, when we think we are reasoning, we are merely rationalizing a conclusion we have already reached. Reasoning often amounts to justifying an existing belief rather than arriving at a better one.

A recent white paper hypothesizes that the function of reasoning is not truth-seeking but argumentative: we use reasoning to devise and evaluate arguments for persuading others and ourselves. The authors conclude that while reasoning makes human communication more effective and potent, it not only falls short of reliably delivering rational beliefs and rational decisions, but may even be detrimental to rationality in many cases.

This explains why humans reason rather poorly, and are vulnerable to cognitive biases such as confirmation bias.

***

The quote is from a recent edition of the Edge online magazine, and the white paper is available here for free.

On Bullshit

In a wonderful passage from On Bullshit, Harry Frankfurt explains how bullshitting is not the same as lying:

It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction. A person who lies is thereby responding to the truth, and he is to that extent respectful of it. When an honest man speaks, he says only what he believes to be true; and for the liar, it is correspondingly indispensable that he considers his statements to be false. For the bullshitter, however, all these bets are off: he is neither on the side of the true nor on the side of the false. His eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says. He does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.

Does this make bullshitting more or less despicable than lying? I guess that depends on whether it is deliberate or not.

[Hat Tip: Sam Harris]

The Moral Landscape

About a year ago, I posted an audacious TED video by Sam Harris in which he argued that science can play a role in understanding and defining morality. This was a retort to the widely held belief that science can tell us what is, but not what ought to be — that you can’t derive ought from is. Now Mr. Harris has published a book on this subject: The Moral Landscape: How Science Can Determine Human Values, which I just finished reading. Unlike his earlier books, he is treading uncharted territory this time, but the book does live up to the standards set by his earlier work (The End of Faith and Letter to a Christian Nation).

Harris’ core argument is this: Ultimately, questions of values and morality are really questions about human well-being, and the well-being of conscious creatures can be scientifically understood. Think of neuroscience and neurophysiology, which can observe and explain emotional impulses (happiness and suffering) in the brain as results of outside stimuli (violence, mercy, co-operation, etc.). By understanding how actions relate to impulses in the brain, we can sort those actions into two groups: actions that elevate human well-being, and those that hamper it.

We do have to assume, however, that improving the well-being of conscious creatures is intrinsically good. But once you make this assumption (which seems an obvious and reasonable one to make), science can step in and help us maximize human well-being.

This claim is a controversial one (especially among moral relativists). For centuries, science has busied itself with discovering empirical truths about nature and stayed away from pronouncing on values and morality — as if morality belonged to a realm beyond the reach of science. Those who believe that morality comes from religion have the biggest issue with this claim. If what Sam Harris proposes is right, the ever-expanding sphere of science would grow even further, and there would be almost nothing left for religion to do. David Hume famously argued that you cannot derive an ought from an is, and Stephen Jay Gould held that science and religion belong to two different and non-overlapping magisteria: science deals with discovering facts that help us understand the world, while religion deals with values and morality. If, instead of Jesus and Lord Rama, we start looking to science for moral guidance, then wouldn’t that give a pink slip to religion? This fear seems to drive the disagreements (with Harris) in religious circles.

The second, and more rational, critique comes from scientists and philosophers. The crux of their counter-argument is: how do you measure well-being? In order to improve something, we need a measurement scale. What’s the unit of well-being? To this, Harris responds with the example of health. There’s no single metric by which we can define health, but that doesn’t stop us from using science to try to improve it. There might be many different ways (as opposed to one objective and universal way) to improve health, but that doesn’t stop us from saying that there are answers to questions about health. Similarly, there might be several peaks of well-being on the moral landscape – more than one way to be happy – but that shouldn’t stop us from trying to define what those peaks are, and how we can attain them by means of science. There might be multiple answers to questions about morality (different people find happiness in different things), but the absence of a single objective answer shouldn’t deter us from studying what those answers are and how they affect our well-being.
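As a side note, here is a toy sketch of the multiple-peaks metaphor (my own illustration, not from the book; the well-being function and all its numbers are invented). It shows a landscape with two maxima, where simple hill-climbing from different starting points reaches different, but both genuine, peaks: multiple answers, yet still answers.

```python
# A toy "moral landscape" (invented for illustration, not from Harris's book):
# a one-dimensional well-being function with two peaks, explored by
# naive hill-climbing from several starting points.

import math

def well_being(x: float) -> float:
    # A made-up landscape with peaks near x = 2 and x = 8
    return math.exp(-(x - 2) ** 2) + 1.5 * math.exp(-(x - 8) ** 2)

def hill_climb(x: float, step: float = 0.01, iters: int = 10_000) -> float:
    # Move uphill in small steps until no neighbor is higher
    for _ in range(iters):
        if well_being(x + step) > well_being(x):
            x += step
        elif well_being(x - step) > well_being(x):
            x -= step
        else:
            break
    return x

for start in (0.0, 5.5, 10.0):
    peak = hill_climb(start)
    print(f"start={start:4.1f} -> peak near x={peak:.2f}, "
          f"well-being={well_being(peak):.3f}")

# Different starting points land on different peaks. The peaks are not
# equally high, but each is a real local answer to "where is well-being
# maximized?", which is the gist of the multiple-peaks reply.
```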

Whether one agrees with Harris’ stand or not, the book is a very interesting read; he raises some very convincing points, argued with such specificity that they make you think hard and deep. The book has already stirred a lot of intellectual discussion (see here, here and here); many don’t completely agree with him, but they all concur that The Moral Landscape is a stimulating and novel idea.

The Trolley Problem

Consider the following two scenarios:

Version 1 – A trolley is hurtling out of control along a track, bearing down on five people who have been tied to the track by a mad philosopher. You can avert the disaster by flipping a switch that diverts the trolley to another track. Unfortunately, there is one man tied to that other track. Is it morally permissible (or for that matter morally mandatory) to flip the switch?

Version 2 – A trolley is hurtling out of control along a track, bearing down on five people who have been tied to the track by a mad philosopher. You can stop the trolley by pushing a man in front of it. Is it morally permissible (or for that matter morally mandatory) to push the man in front of the trolley?

Most people would flip the switch in the first scenario but wouldn’t push the man in the second (according to a survey). Both scenarios have exactly the same consequences, but for some (innate) reason we are prepared to kill the man by flipping a switch yet not by pushing him.

Here are slightly more convoluted versions of these hypotheticals:

Version 1 (a) – A trolley is hurtling out of control along a track, bearing down on five people who have been tied to the track by a mad philosopher. Across the track is a man sitting in front of his house, completely unaware of the trolley situation. If you say “hey, come here” to the man, he will walk towards you and in the path of that trolley, be killed and five lives will be saved as a result. You assess the situation and decide to call out. The man gets killed, and five people are saved.

Version 2 (a) – A trolley is hurtling out of control along a track, bearing down on five people who have been tied to the track by a mad philosopher. Across the track is a man walking towards you and in the path of that trolley, completely unaware of the trolley situation. If you shout “hey, watch out”, he will stop walking, be safe and five lives will be lost as a result. You assess the situation and decide not to call out. The man gets killed, and five people are saved.

Based on a survey (in which people were asked to judge this behavior on a five-point scale, from “morally impermissible” to “morally virtuous”), the majority of people believe that harm caused by action (calling out) is more culpable than harm caused by inaction.

These moral biases and illusions (or contradictory moral instincts) don’t exist only in such philosophical and ethical experiments. For example, we seem to be much more comfortable stealing music online (which involves hitting a few keys on a computer keyboard) than stealing a music CD from a store. (Flipping a switch versus pushing a man.) Also, we are more forgiving when a terminally ill patient dies because medicine was withheld than when a doctor kills the patient by injecting a lethal dose of painkiller. (Inaction versus action.)

***

Speaking of moral illusions, here’s one more (I promise, the last one!) scenario:

The Headache Problem – A billion people are experiencing fairly minor headaches, which will continue for another hour unless an innocent person is killed, in which case they will cease immediately. Is it okay to kill that innocent person?

Our moral instinct says that the answer is, quite obviously, ‘no’.

But here’s a counter-argument from an economist’s point of view: the answer is ‘yes’. We know that almost nobody is willing to pay a dollar to avert a one-in-a-billion chance of death. (Surveys about our willingness to pay for auto safety devices have shown this.) Yet most people will happily pay a dollar to cure a headache. Which suggests that most people value curing a headache more than avoiding a one-in-a-billion chance of death. Aggregate that preference over a billion headache sufferers, and curing all the headaches is, by everyone’s own revealed preferences, worth exactly one expected death.
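To make the arithmetic explicit, here is a minimal sketch in Python (my own illustration; the one-dollar figure comes from the scenario, and the aggregation step is the assumption the argument rests on):

```python
# A minimal sketch of the expected-value arithmetic behind the
# economist's 'yes'. The $1 figure is from the scenario; the framing
# around it is my own illustration.

population = 1_000_000_000               # people with a headache
headache_cure_value = 1.0                # $ each would pay to cure it
death_risk_per_person = 1 / population   # the risk nobody pays $1 to avoid

# Each person values curing their headache at $1, but values avoiding a
# one-in-a-billion chance of death at less than $1. Aggregated:
total_headache_value = population * headache_cure_value  # $1 billion
expected_deaths = population * death_risk_per_person     # exactly 1.0

print(f"Aggregate value of curing the headaches: ${total_headache_value:,.0f}")
print(f"Expected deaths if everyone bears the risk: {expected_deaths:.0f}")

# By people's own revealed preferences, a billion cured headaches
# 'outweigh' one statistical death, which is the economist's conclusion.
```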

This probably seems counter-intuitive, but think about how little we care about the risk of death when we drive. The risk of dying in a road accident in the U.S. is about 1 in 6,500. Every time you take your car out to drive to the grocery store, people (including you) become slightly more likely to die in a road accident. We know this, but we value our convenience more than the increased chance of death, and we take that risk, consciously or otherwise. As Steven E. Landsburg argues in his book The Big Questions: Tackling the Problems of Philosophy with Ideas from Mathematics, Economics, and Physics:

[W]e all agree to kill random people all the time. We drive, install swimming pools, use drain cleaners, and drink tequila, knowing with certainty that some number of other people will die as a result. People have died so that other people can drive to the opera. Why shouldn’t they die to cure other people’s headaches?

Does that make sense, or do you think there’s a flaw in the argument?

***

The first two scenarios and The Headache Problem are taken from Steven E. Landsburg’s book The Big Questions. The other two versions of The Trolley Problem are from Jonah Lehrer’s blog The Frontal Cortex.

We Must Converge

Sam Harris delivers an impressive talk in which he argues that science does have answers to questions about morality.

Most people are skeptical about science’s role in leading us to moral values; it is a widely held belief that science can tell us what is, but not what ought to be. But Harris claims that values are a certain kind of empirical fact: facts about the well-being of conscious creatures. And science can help us discover these factual values.

Thought-provoking and audacious as they might be, I don’t completely agree with Harris’ arguments about the objectivity of morality. The suffering and well-being of conscious creatures might very well be empirical facts, and science can be a very useful, even vital, tool for understanding those facts, but that still doesn’t tell us how we should act or feel once we have acquired this knowledge.

However, I do agree that we must converge on the answers we give to the most important questions about human life. And in order to converge, we must first agree that there are answers. But as a moral relativist, I don’t believe there are ultimate, objective moral values waiting to be discovered (like scientific facts). I can comprehend the universality of morality, but not its absoluteness.

***

While talking about Sam Harris, here’s an earlier post of mine: Religious Moderates.

Also, read his debate with Rick Warren (the pastor who gave the invocation at Obama’s presidential inauguration): The God Debate. A couple of excerpts below:

Sam, is there a God in the sense that most Americans think of him?

SAM HARRIS: There’s no evidence for such a God, and it’s instructive to notice that we’re all atheists with respect to Zeus and the thousands of other dead gods whom now nobody worships.

And,

The core problem for me is divisive dogmatism. There are many kinds of dogmatism. There’s nationalism, there’s tribalism, there’s racism, there’s chauvinism. And there’s religion. Religion is the only sphere of discourse where dogma is actually a good word, where it is considered ennobling to believe something strongly based on faith.

Seekers of Depth and Profundity

In a thought-provoking and stimulating book, Identity and Violence: The Illusion of Destiny, Amartya Sen provides a powerful critique of the current trend of analyzing modern conflicts as a “clash of civilizations” or a clash between cultures – i.e., the tendency to look at human beings as distinct groups rather than individuals. He argues, quite convincingly, that the “prospect of peace in the contemporary world may well lie in the recognition of the plurality of our affiliations” rather than in communitarian thinking that crudely sorts people into contending groups by assigning each a singular, one-dimensional (often religious, cultural, or ethnic) identity.

This omnipresent phenomenon of thinking in terms of groups reduces people to a confined, unidimensional space. Be it “the Islamic world” against “the Western world”, the Hindus against the Muslims, the Tamils against the Sinhalese, or the Marathi manoos against the “non-Marathis”, this us-against-them rhetoric is very effective at provoking hatred (which is easily manipulated by vote-thirsty politicians). Why, one might ask, is this communitarian approach so popular? Sen explains:

It is not hard to understand why the imposing [communitarian or] civilizational approach appeals so much. It invokes richness of history and the apparent depth and gravity of cultural analysis, and it seeks profundity in a way that an immediate political analysis of “here and now” — seen as ordinary and mundane — would seem to lack.

We are the “seekers of depth and profundity”. We try to find greater meaning and higher purpose not only in conflicts or struggles (as mentioned above), but also in the very existence of life itself.

Which brings me to the topic of my last post (Divine Justice?): how this tendency increases our proclivity towards unscientific convictions. We want to believe that life has a greater, divine purpose. But if there’s one lesson that evolution teaches us, it is that human beings are, like monkeys, fish, birds, ants, worms, roaches, and bacteria, a product of a random, unsupervised, and impersonal process. Darwin’s evolution is a pink slip to the benevolent creator, to the purposefulness of life.

Depressing? It shouldn’t be. Consider the following passage from Richard Dawkins’s Unweaving the Rainbow:

To live at all is miracle enough. – Mervyn Peake, The Glassblowers

We are going to die, and that makes us the lucky ones. Most people are never going to die because they are never going to be born. The potential people who could have been here in my place but who will in fact never see the light of day outnumber the sand grains of Arabia. Certainly those unborn ghosts include greater poets than Keats, scientists greater than Newton. We know this because the set of possible people allowed by our DNA so massively exceeds the set of actual people. In the teeth of these stupefying odds it is you and I, in our ordinariness, that are here.

And here’s a short interview between two of the greatest living stalwarts of Darwin – Richard Dawkins and Daniel Dennett:

Hallelujah for the universe!

Cultural Pessimism

Cultural pessimism has existed as long as culture itself. Just a few weeks ago, at a party, I met some friends who bemoaned how new technologies, like GPS for instance, have made us less “intelligent” compared to the prior generation(s) that did not have access to such technologies and had to rely on their own intuition, knowledge, and other (non-technical) resources. Although we didn’t talk about cultural or economic degradation, this “Google-makes-us-stoopid” mindset can be observed in, and generalized to, those domains as well. The general conviction is that things are going from good to bad, or from bad to worse.

In a thought-provoking book, The Myth of the Rational Voter, economist Bryan Caplan calls this “pessimistic bias”. Virtually every generation has believed that people are not up to the standards of their parents and grandparents. Glorifying the past and looking down on the present (and the future) has probably been going on ever since the first caveman settled in a cave!

It is not improbable conjecture that the feeling that humanity was becoming over-civilized, that life was getting too complicated and over-refined, dated from the time when the cave-men first became such. It can hardly be supposed – if the cave-men were at all like their descendants – that none among them discoursed with contempt on the cowardly effeminacy of living under shelter or upon the exasperating inconvenience of constantly returning for food and sleep to the same place instead of being free to roam at large in wide-open spaces. [From Primitivism and Related Ideas in Antiquity, by Lovejoy and Boas]

In reality, the effects of technology and industrial progress on our collective intelligence, economy and culture have hardly been detrimental. Consider, for example, this question by Steven Pinker from the recent issue of Edge online magazine:

Take the intellectual values that are timeless and indisputable: objectivity, truth, factual discovery, soundness of argument, insight, explanatory depth, openness to challenging ideas, scrutiny of [perceived] dogma, overturning of myth and superstition. Now ask, are new technologies enhancing or undermining those values?

The answer, clearly, is that the new technologies are in fact enhancing those core intellectual values. Still, the pessimistic illusion prevails – probably because it has strong roots in human nature itself. In this decade-old article, Tyler Cowen pushes back against this myth and explores some reasons behind the wide gap between objective conditions and subjective perceptions:

It is easy to perceive the loss of what we know and harder to discern new developments and surprises. Even if long-term trends are positive, culture may appear to be deteriorating.

Observers often judge present culture against the very best of past culture, causing the present to appear lacking in contrast. But comparing the best of the past against the entirety of the present is unfair. No matter how vital contemporary culture may be, our favorite novels, movies, and recordings were not all produced just yesterday.

The past is always going to contain more accumulated achievements than any particular point in time (i.e. the present). Hence, the present almost always pales in comparison to the ‘good old days’. Moreover, in its stronger forms, this pessimism makes us “open minded” to paranoid fantasies:

Some part of human nature connects with the apocalyptic. Time and again, pessimists among us have envisioned the world going straight to hell. Never mind that it hasn’t: A lot of us braced for the worst. Whether the source is the Bible or Nostradamus, Thomas Malthus, or the Club of Rome, predictions of calamity aren’t easily ignored, no matter how many times we wake up in the morning after the world was supposed to end. [Cox and Alm]

To end this post on a positive note, check out (1) the Flynn Effect – the consistent rise of I.Q. scores over generations – and (2) this illuminating TED talk (video below) by Steven Pinker, in which he convincingly argues that we are probably living in the most peaceful time in our species’ existence!

If you liked this post, you might enjoy some of my older related posts:

The Myth of Rational Voter

Cognitive Biases and Nudge

Cognitive Illusions

Beyond Reasonable Doubt

In an extract from his upcoming book, The Greatest Show on Earth (which I am eagerly waiting to read), Richard Dawkins explains that it’s ‘proof beyond reasonable doubt’ that establishes a scientific theory as a “fact” – unlike, say, in mathematics, where a rigorous, impeccable, ultimate proof is (a) possible, and (b) required to prove a theorem.

Why, then, do we speak of “Darwin’s theory of evolution”, thereby, it seems, giving spurious comfort to those of a creationist persuasion — the history-deniers, […] — who think the word “theory” is a concession, handing them some kind of gift or victory? Evolution is a theory in the same sense as the heliocentric theory. In neither case should the word “only” be used, as in “only a theory”. As for the claim that evolution has never been “proved”, proof is a notion that scientists have been intimidated into mistrusting.

Influential philosophers tell us we can’t prove anything in science.

Mathematicians can prove things — according to one strict view, they are the only people who can — but the best that scientists can do is fail to disprove things while pointing to how hard they tried. Even the undisputed theory that the Moon is smaller than the Sun cannot, to the satisfaction of a certain kind of philosopher, be proved in the way that, for example, the Pythagorean Theorem can be proved. But massive accretions of evidence support it so strongly that to deny it the status of “fact” seems ridiculous to all but pedants. The same is true of evolution. Evolution is a fact in the same sense as it is a fact that Paris is in the northern hemisphere. Though logic-choppers rule the town, some theories are beyond sensible doubt, and we call them facts. The more energetically and thoroughly you try to disprove a theory, if it survives the assault, the more closely it approaches what common sense happily calls a fact.

Read the whole thing here.

Also see my earlier post Science Never Proves Anything where I argue how all scientific truth is provisional.

Science Never Proves Anything

The validity of one of the most (if not the most) beautiful and groundbreaking scientific discoveries of all time – Darwinism – is often disputed with the common argument that it is, after all, just a theory. This not-so-rare refrain, especially popular among creationists, is based on a flawed notion: a misunderstanding of what a scientific theory actually is.

Here’s a definition from Wikipedia: “A scientific theory is a well supported body of interconnected statements that explains observations and can be used to make testable predictions.”

There are two interesting points to make here:

(1) In science, a theory is much more than a speculation, much more than a mere hypothesis. (This might seem trivial to some, but the creationists don’t seem to get this and often dismiss the theory of evolution based on that convenient misunderstanding.)

In the general sense, a theory is considered to be more like a ‘point of view’ or a hypothesis, far removed from fact. But in science, a theory has a much stricter connotation. A scientific theory is much closer to fact – often as close as we can get to fact.

As Jerry Coyne explains in Why Evolution Is True, for a theory to be considered scientific, it must be testable and make verifiable predictions. A theory is considered “true” when its predictions are confirmed repeatedly, when we have accumulated enough evidence in its support. There’s no tipping point after which a scientific theory becomes a fact.

And that brings me to the second point:

(2) Taking another quote from WEIT: All scientific truth is provisional, subject to modification in light of new evidence. There’s no alarm bell that goes off to tell scientists that they’ve finally hit the ultimate, unchangeable truths about nature. Scientists, unlike zealots, can’t afford to become arrogant about what they accept as true.

In a brilliant post (that blew my mind away), Sean Carroll writes about a very wrong way of thinking about science: “the idea that true and reliable knowledge derives from rigorous proof, and anything less than that is dangerously uncertain.” Referring to the example of Einstein’s theory of General Relativity, Carroll writes:

Even when we do believe [a scientific] conclusion beyond any reasonable doubt, we still understand that it’s an approximation, likely (or certain) to break down somewhere. There could very well be some very weakly-coupled field that we haven’t yet detected, that acts to slightly alter the true behavior of gravity from what Einstein predicted. And there is certainly something going on when we get down to quantum scales; nobody believes that GR is really the final word on gravity. But none of that changes the essential truth that GR is “right” in a certain well-defined regime. When we do hit upon an even better understanding, the current one will be understood as a limiting case of the more comprehensive picture.

Scientific theories are never proven completely. They are supported by facts, but they never become (ultimate, unchangeable) facts. As Carroll writes, we have to take “I believe x” to mean, not “I can prove x is the case”, but “it would be unreasonable to doubt x.” The absence of a reasonable doubt (based on evidence) is what makes a scientific theory “true”. But the doubt is never cleared completely – we can’t be absolutely sure about anything.
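To make the ‘no tipping point’ idea concrete, here is a minimal Bayesian sketch (my own illustration; all the probabilities are invented for demonstration). Each confirming test raises our credence in a theory, yet the credence only approaches certainty, never reaching it:

```python
# A toy Bayesian update: confidence in a theory grows with each
# confirmed prediction but never reaches 1. All numbers are assumed
# purely for illustration.

prior = 0.5                  # initial credence in the theory
p_confirm_if_true = 0.99     # chance a test confirms, if the theory is true
p_confirm_if_false = 0.30    # chance a test confirms anyway, if it is false

credence = prior
for test in range(1, 11):
    # Bayes' rule after one more confirming observation
    numerator = p_confirm_if_true * credence
    credence = numerator / (numerator + p_confirm_if_false * (1 - credence))
    print(f"after test {test:2d}: credence = {credence:.6f}")

# The credence climbs toward 1 but never equals it: there is no tipping
# point at which a well-supported theory turns into an absolute fact.
```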

Also see the Evolution as Theory and Fact entry on Wikipedia.
