10 December 2012

A Bird and a Computer in the Brain (Two Systems of Thinking)


The popular belief is that people are fully rational, that they do all kinds of cost-benefit analyses and somehow always reach the optimal conclusion. As you might have learned by now, this assumption is deeply flawed. People can make rational judgments, but more often than not they don’t.

In the field of decision-making psychology there is a widely accepted (though not unchallenged) view of how people think and make judgments. It is called the “dual-system” view of judgment. As its name says, it assumes the existence of two major ways of thinking.
The general terminology for these ways of thinking is quite simple: “System 1” and “System 2”. To better illustrate how these systems of thinking function I’ll call them “The bird brain” for System 1 and “The computer brain” for System 2.

Let me describe each system (brain) and then I’ll present some considerations on how they interact.

The “Bird Brain” or “System 1” has the following characteristics:

The “bird brain” is very fast and intuitive. It bases its judgments on associations acquired through experience. These associations are based on similarity with previously encountered situations or prototypes. For example, if the “bird brain” sees a man dressed in an expensive suit, wearing an expensive watch and driving a fancy car, it will infer that this person is rich or a corporate professional. The rationale is simply that this person looks like a rich person / corporate professional.

The “bird brain” perceives the environment in a quasi-statistical manner. In other words, it does not use statistical descriptions, but rather inaccurate subjective inferences about the environment. For example, the “bird brain” will infer that there are more murders than suicides because instances of murder (from news and movies) are prevalent in memory, while instances of suicide are not. In fact, there are more suicides than murders.

The processing of information done by the “bird brain” is based on heuristics, in other words rules of thumb. For example, the “bird brain” wants a discount rather than a good price. For the fast, associative system, a discount is equivalent to a good price, but it is not necessarily so: a discounted price can be higher than the regular price.

The “bird brain” is relatively undemanding of cognitive capacity, in the sense that it is effortless. Intuitive and associative judgments performed by System 1 are made without significant use of mental, and subsequently bodily, resources. The brain is the body’s biggest consumer of metabolic energy, and the “bird brain” uses relatively little of it.

The “bird brain” is activated automatically, in the sense that it is always functioning. It is the default way of processing information and making judgments. At the same time, the “bird brain” can be overridden by the “computer brain”.

The “Computer brain” or “System 2” has the following characteristics:

The “computer brain” is slow and analytical. It bases its judgments on rules acquired through culture or formal learning and tries to identify structures in the environment. A very good example of “System 2” reasoning is engineering planning and design. When an engineer designs something, he or she reasons analytically, using rules learned through formal education and drawing on established structures identified in the environment, such as knowledge of the properties of materials.

The “computer brain” is controlled and consists of explicit thought processes. In addition, “computer brain” processes are conscious. Unlike the “bird brain’s” unconscious processing, when someone makes judgments in “computer brain” mode, he or she is aware of thinking critically.

The “computer brain” is demanding of cognitive capacity and metabolic resources. Reasoning with System 2 draws on cognitive and energy resources. In other words, using the “computer brain” leads to fatigue and may end up exhausting limited energy resources.

The general belief is that we operate in “computer brain” mode, but the reality is that most human decisions and subsequent behavior are dominated by the “bird brain”. The truth is that the “bird brain” is overall very effective, in the sense that it leads to decisions that are not necessarily optimal, but good enough. Moreover, the “bird brain” is very good at fulfilling evolutionary goals such as survival, finding good mates, investing in children and so on.

What is intriguing is that the world in which we live now is significantly different from the world in which humans have evolved for millions of years. Now we have to make choices about financial products, buy computers and make long term plans. In these less than familiar (from an evolutionary perspective) contexts, the “bird brain” fails to provide optimal solutions.

It is obvious that the two systems of thinking do not act independently, rather they interact. It is possible in certain situations and states that the “bird brain” is the only one functioning, but more often than not, the two systems interact. Next I’ll present some of the most common situations of interaction between the “bird brain” and the “computer brain”.

As mentioned earlier, the default mode of mental processing is the “bird brain”. At the same time, the “computer brain” can be “switched on” and may overrule the “bird brain”. For example, the “bird brain” might say “buy the most expensive product, assuming that it is also the best”. When the “computer brain” is switched on, it might say “wait, look at the attributes of the product and pick the one whose combination of attribute levels best suits your needs”. Subsequently, the person making the purchase will go through the mental effort of doing this analysis.

Another situation, apart from the aforementioned one, is when “bird brain” thinking leads to a conclusion (or behavior), the “computer brain” is switched on and gives a different conclusion, and the two conclusions conflict with each other. In the previous example the “computer brain” took control, but it is quite possible that the interaction between the two systems simply results in conflict and no action being taken. Similarly, a person might implement the result of the “bird brain” while knowing that the “computer brain” gives a different opinion.

A third way of interaction between the “bird brain” and the “computer brain” is when a person tries (wants) to make a sound judgment. This implies the use of the “computer brain”. At the same time, the “computer brain” will work using inputs from the “bird brain”. A very good example of this is “arbitrary coherence”, which I will explain a bit later.

A fourth way of interaction between the “bird brain” and the “computer brain” is when a person makes a decision or exhibits a behavior based on the “bird brain” thinking mode. After the decision is made or the behavior exhibited, that person will try to make sense of his or her actions and will use the “computer brain” to justify them. For example, if someone makes a purchase in “bird brain” mode, such as buying a very expensive coffee-making machine, then when faced with the accomplished fact, he or she will use the “computer brain” to come up with (solid) reasons for the purchase, such as “it is of high quality and will last for a long time”.

To sum up, the “dual-system” view of judgment states that people make judgments in two modes – System 1 (bird brain) and System 2 (computer brain). The judgments made by System 1 (bird brain mode) are fast, associative, uncritical, heuristic-based and effortless. The judgments made by System 2 (computer brain mode) are slow, computational, critical, rule-based and effortful. System 1 (bird brain) is quite good at ensuring the accomplishment of evolutionary goals such as survival and perpetuation of the species. At the same time, the “bird brain” is quite bad at making the complex judgments that we face in the modern world.

The two systems (bird and computer brain modes) interact. This interaction can result in: (1) the “computer brain” overruling the “bird brain”; (2) mere cognitive conflict; (3) the “computer brain” mode functioning on (flawed) “bird brain” inputs; (4) the “computer brain” creating reasonable explanations for actions performed in “bird brain” mode.

The default way of reasoning is the “bird brain” mode.


Despite Seldom Encounters With Foreigners or Immigrants, Some People Have Strong Xenophobic and Racist Attitudes… It’s Not Despite; It’s Because (9)


The idea for this post came while I was chatting with a Dutch gentleman who mentioned several times that people in the East of The Netherlands are much more xenophobic and racist than people in the West of The Netherlands. Now, to better understand the context of this statement, you have to know that The Netherlands is quite a small country: from East to West it spans approximately 250 km, which is not that much. At the same time, the biggest cities (and there are quite a few) are in the western part of the country. This is where big cities such as Amsterdam, Rotterdam and The Hague are.

As you might have guessed already, most foreigners and immigrants have settled in the western part, where the major cities and industries are.

The exact extent of xenophobic and racist attitudes among people in the eastern part of the country is not known to me. At the same time, I have heard this from more than three people in The Netherlands, so a grain of truth must be there.

Now, let’s see how this works. As the story goes, people who have met very few foreigners and immigrants have strong negative attitudes toward them. This is quite surprising, since these people have attitudes about something they don’t know much about. To the gentleman who told me this story this was very surprising. What added to his surprise was the fact that people who live in the areas where there are many foreigners and immigrants have more favorable attitudes towards these groups.

To the naïve observer this is really surprising; people who know very little about foreigners and immigrants have strong negative attitudes towards these groups, while people who know considerably more about these groups have more favorable attitudes towards them.

The surprise comes from the (flawed) assumption that in order to have a strong attitude one has to have solid knowledge on the subject. This assumption is coherent (seems to make sense), but is not necessarily true. In some cases having a lot of knowledge on a topic leads to having a strong attitude towards that topic (thing).

At the same time, in order to have an attitude it is not necessary to have knowledge on the subject of the attitude. For example, if you go out on the street and start asking people what they think about human life in space (such as on the International Space Station), you will get a lot of opinions. We have to acknowledge that most people (including ourselves) have very little knowledge about human life in outer space. However, knowing very little about something does not imply that we have no opinion on the topic or no attitude towards it.

If we agree that in order to have an opinion or an attitude towards something we need not have a lot of knowledge about it, then the question that arises is: why a negative attitude? The answer is quite simple, and in order to find it, I suggest taking an imaginary trip back in time to the era of human evolution.

Going a few million years back in time, we see that humans (or pre-humans) were living in small and isolated communities. As Geoffrey Miller says, most likely people from one community would not meet people from another community unless they went to war against them. Living in these small, isolated communities led to the evolution of the “fear of new things”.

The evolutionary rationale of the “fear of new things” is quite simple. For all the things that are “old”, namely things I know, I have knowledge of whether or not they are dangerous to me (and to my community). In other words, I know that a cat is more or less harmless; I know that a lion is very likely to be harmful; I know that people who look like me and speak like me are (most likely) not going to kill me. For the things that are “new”, namely things I don’t know, I have no knowledge of whether they are dangerous or not. I believe that the evolutionary mechanism worked very simply, in the sense that people who did not fear “new” things eventually found something new that was dangerous and died.

Now you might say that this guy is advocating racism and xenophobia. In no way do I do so. What I’m trying to explain is that the fear of what is new and what is different is something very natural for most humans. This fear is based on the lack of knowledge about the level of danger that “the something new” presents.

Foreigners and immigrants are by their nature “new things”. For people who have never encountered foreigners and immigrants it is normal to have a negative attitude towards these groups simply because they are “new” or “different”. By the same rationale, people who have met foreigners and immigrants and got to know them at least to the level of “they are not posing a threat to my life” have much less negative attitudes towards these groups simply because they are neither “new” nor “potentially dangerous”.

Again, I’m not advocating negative attitudes towards certain groups; rather, I am saying that the issue is not the attitude in itself, but the lack of knowledge about the group.

In the 21st century it is virtually impossible to have zero knowledge about a certain group of people or about any given topic, even life aboard the International Space Station. In the information era, people pick up information on various topics with or without their conscious awareness.

In the case of people who don’t encounter other groups of people such as immigrants and foreigners, there is some information available on these groups. Let’s assume that Hans is a gentleman who lives in a relatively isolated community in the East of The Netherlands. He has never encountered a foreigner or immigrant in his community. At the same time, Hans has some knowledge on these groups acquired mainly from mass media and word of mouth.

I assume that this scenario is more than plausible, right? Now, if we think about how Hans got his information about immigrants and foreigners, it is not hard to imagine that most of it was of a rather negative nature. This is a clear case of the “availability heuristic” or, as I like to call it, “observation bias”.

Usually, the information on immigrants and foreigners in the mass media, and subsequently in the public consciousness, is negative. There are news reports about how a certain immigrant did something bad, or how a group of immigrants was planning something bad and the police caught them, and so on. When this news reaches people who live in relatively isolated communities, it somehow finds a place in the public consciousness and is established as truth.

The truth is that some immigrants and foreigners do bad things and this gets reported by the media. At the same time, the media never presents the  large majority of immigrants and foreigners who work hard and live normal lives. You will never see a news report about the thousands of immigrants who work 6-7 days per week to support their families. You will never see news about foreigners who hold highly skilled jobs and are part of the “engine” of the economy.

In the case of people living in areas where there are more immigrants or foreigners, this type of news is generally counter-balanced by their own experiences. For example, the city I live in, Rotterdam, is the most cosmopolitan city in The Netherlands. Dutch Caucasians represent about 50% of the population. If you lived in Rotterdam, you would encounter a lot of people of different ethnicities, races, religions and so on. If you heard news about an immigrant who did something bad, this news would blend into your existing knowledge about foreigners and immigrants. Since you live in a very diverse city, you know that immigrants and foreigners are generally OK people who live normal lives and work hard. This implies that your knowledge of foreigners and immigrants would be very little influenced by the news about the immigrant who did something bad.

Combine the general lack of knowledge about a social group with the natural “fear of new things” and the little, mostly negative, information available in the mass media and the public consciousness, and the result is that people in more isolated communities tend to have very negative attitudes towards groups of people about which they know almost nothing.

Lack of knowledge, or in other words ignorance, is a major source of xenophobia and racism…

It’s not despite not knowing too much about immigrants and foreigners, it’s because some people know very little about these groups that they have very negative attitudes towards them. 


7 December 2012

Are You Sure You Want This?


One of the fundamental assumptions of rationality and normative economics is that people’s preferences are stable. In other words, it goes something like this: we can’t judge what someone likes (prefers), but once we know what she likes (wants), we can be sure that the preference will not change.

The classical example is: if you prefer apples over oranges and you prefer oranges over peaches then for sure you will NOT prefer peaches over apples.

This is one of the foundation stones of rational choice theory. At the same time it is a very flawed assumption. Let me give you an example:  do you prefer coffee or beer? Apart from people who don’t drink either one or both of them, for the rest of the human species this is a really tricky question. The reason is that it doesn’t say when or in what context this preference is exhibited.

Let’s assume that you go out for a chat and a drink with an economist. The economist asks you what you will have: coffee or beer. Now, it is early evening (say… 7:35 PM) and you had a stressful day. You say that you could use a beer. The economist you went for a chat and a drink with will assess that your preference is beer over coffee. You have a couple of beers and a nice chat with the economist (well… as nice as a chat with an economist can be).

Now, imagine that for some reason you meet the same economist a few days later for breakfast at his place. The economist made a nice omelet and some toast. Because he knows your preference already he offers you a beer.

Is this at least a bit weird? Of course it is. Who has beer for breakfast, apart from a few alcoholics?

This was a somewhat far-fetched example of a violation of the preference-stability assumption. A valid critique would be that coffee and beer are not in the same category, and to a certain extent that is true. At the same time, people change their preferences even within the same category. Next I will explain how this works, depending on how we evaluate the options available for products within the same category.

In both theory and real life there are two ways in which we evaluate options that are available. In the separate evaluation mode we evaluate options one by one. For example you want to buy a couch and you have little money so you decide to buy a second-hand couch.

You search on a website with personal selling ads and you call a guy who is moving and sells his old couch. You next go to the guy’s place to see the couch. You get there and start evaluating the couch. Since there is only one couch in this guy’s apartment you evaluate a single couch at a time. The evaluation can be done in various ways. For example you evaluate the couch by thinking how much you would pay for it or you can give the couch a liking score.

Evaluating a product (couch) in this scenario is called in scientific terms “separate evaluation”. If you go to another guy’s house to see another couch, then you will evaluate the second couch on its own (without making a direct comparison between couches).

Taking the same scenario in which you want to buy a second-hand couch, you now decide to go to a second-hand furniture store. There are two couches for sale on that day and they are placed one next to the other. When you are evaluating each couch you inherently start making comparisons between them. This is called in scientific terms “joint evaluation”.

You might think that this is really trivial and to a certain extent you are right. You might think that this is how things are and that’s it. Indeed, this is how things are and sometimes we evaluate (and subsequently choose) things either one by one or jointly. The most interesting thing is that depending on how we evaluate options (separately or jointly) our evaluations and subsequent choices are likely to change.

Hsee, Loewenstein, Blount and Bazerman summarize in an article from 1999 how these changes of preferences work.

A series of studies have adopted the following framework. In order to establish preferences in the separate evaluation mode, there were two groups of random people. Each group was presented with a description of one product (or person); the description was brief and mentioned two attributes of the product. Each person in each group was then asked to evaluate the product, either by saying how much they liked it or by saying how much they would pay for it. In order to establish preferences in the joint evaluation mode, a third group of random people was presented with both products and asked how much they liked each one or how much they would pay for it. Next, the preferences (likings or willingness to pay) established in the separate evaluation mode were compared to the preferences established in the joint evaluation mode.

The interesting thing was that the preferences in separate evaluation mode were different (opposite) from the ones in the joint evaluation mode. Let me give you an example:

There are two (second hand) music dictionaries:
Music dictionary 1: Number of music terms explained: 10.000 | Condition: “Like new”.
Music dictionary 2: Number of music terms explained: 20.000 | Condition: “Has a torn cover, but otherwise like new”.

I am not going to ask you what you prefer because now, you evaluate the two dictionaries jointly. However, in the separate evaluation mode (one group evaluated the first music dictionary and another group evaluated the second music dictionary) Music Dictionary number 1 was rated higher (preferred).

As you probably preferred it yourself, in the joint evaluation mode Music dictionary number 2 was preferred. Thus, preferences established in the separate evaluation mode were reversed in the joint evaluation mode.

The rationale behind this is quite simple. If we look at the two attributes that describe the music dictionary we see that there is a relatively easy to evaluate attribute, namely the condition of the book. Everyone that has seen a book knows what “Like new” and “Torn cover” mean.

On the other hand, (apart from music specialists) no one knows how many music terms a music dictionary should have. Both 10.000 and 20.000 are large numbers (at least when it comes to words) so the question “how many terms should a music dictionary have?” is a hard one. In other words, the number of terms a dictionary has is a hard to evaluate attribute.

When evaluating each dictionary separately, the easy to evaluate attribute “condition” is prevalent in the evaluation.

Now, when evaluating the two dictionaries together (jointly) things change a bit. The attribute “condition” remains easy to evaluate. But now the two dictionaries can be compared on the hard to evaluate attribute “number of music terms”. This does not mean that suddenly people find it easy to evaluate the number of music terms a dictionary should have, but everyone knows the rule “the more the better”. When the options can be compared against each other on the hard to evaluate attribute, the weight given to the easy to evaluate attribute diminishes.
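To make this weighting shift concrete, here is a toy sketch in Python (my own illustration, not the model from Hsee et al.): each dictionary gets a weighted score from its two attributes, and the only thing that changes between modes is the weight placed on the hard-to-evaluate attribute.

```python
# Toy illustration of the evaluability idea (illustrative numbers, not data from the paper).
def score(condition, terms, weight_on_terms):
    # condition: 1.0 = "like new", 0.3 = "torn cover" (arbitrary values on a 0-1 scale)
    # terms: number of entries, scaled against the larger dictionary (20,000 terms)
    return (1 - weight_on_terms) * condition + weight_on_terms * (terms / 20_000)

dictionaries = {
    "Dictionary 1 (like new, 10,000 terms)":   (1.0, 10_000),
    "Dictionary 2 (torn cover, 20,000 terms)": (0.3, 20_000),
}

for mode, weight in [("separate", 0.2), ("joint", 0.7)]:
    # In separate evaluation the hard attribute gets little weight; in joint
    # evaluation the side-by-side comparison makes it usable, so its weight grows.
    print(mode, {name: round(score(c, t, weight), 2) for name, (c, t) in dictionaries.items()})
```

With a low weight on the number of terms, the “like new” dictionary scores higher; once the side-by-side comparison raises that weight, the preference flips, which is the reversal described above.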

The implications of these preference reversals are major. On one hand there is the theoretical implication of a severe violation of one assumption of rationality, namely the stability of preferences. On the other hand there are numerous practical implications.

In many instances in life we evaluate options in separate evaluation mode. For example, we date one person at a time (well, most people do so). We see apartments to live in one by one, and we have dinner with friends one occasion at a time.
Is this good or bad? The truth is that there is no universal answer. What we can do is to know that when evaluating in separate evaluation mode we tend to focus (base our evaluation) on the easy to evaluate attributes.

In other instances in life we evaluate things jointly. For example, you want to buy a new TV / stereo music player and you go to a shop like MediaMarkt or Saturn. When you get to the area where the TVs / stereos are presented, you are faced with tens of options and you naturally start evaluating them jointly. In the case of a new TV, you will see one that has a very good image and a reasonable price, but right next to it there is a larger TV with a crystal-clear image. When you compare the two on the shop’s wall, of course you will prefer the larger, crystal-clear one, which costs 800 Euros more than the other and takes up twice the space in your living room.

When you get the large TV with the crystal-clear image into your living room, all you will notice is that it takes up a lot of space and that you are 800 Euros short. In your living room you are evaluating in separate evaluation mode, so you no longer see the difference that you saw in the store. Moreover, in a couple of days you will forget about the difference and you will be stuck with a large TV and a hole in your wallet.

There is really no universal answer as to which evaluation mode – separate or joint – is better. Moreover, it is possible to evaluate options in a mixed way. Going back to the second-hand couch example, you can see three different couches at three different sellers’ places within three hours. Each couch will be evaluated separately, but you will make mental comparisons among them. However, this type of evaluation is not the same as evaluating all three couches at the same time in the same location (joint evaluation).

What I believe to be the key learning here is that preferences can change depending on the mode in which options are evaluated.



Note: this post is documented from Hsee, Christopher K., George F. Loewenstein, Sally Blount, and Max H. Bazerman (1999). “Preference reversals between joint and separate evaluations of options: A review and theoretical analysis.” Psychological Bulletin, 125, 576-590.  

6 December 2012

Ups and Downs of Psychological Distance


We have learned in school that distance is spatial. For example young people in Europe learn that Australia is far away and North Africa is close by. These terms “far away” and “close by” are based on the physical distance measured in kilometers (or miles for the British).

Another kind of distance that is embedded in popular culture regards time. We don’t generally perceive time as being physical and if we are asked to associate distance and time it is not exactly intuitive. At the same time, when we refer to events in the past or in the future we add elements that are related to distance. We say that something has happened in the distant past or something will happen in the distant future. Similarly we use expressions like ‘A long time ago” or “Far into the future”.

Psychological distance refers to how a person perceives an object, event, action etc. In essence, if we perceive the “thing” in detail, then the psychological distance is small. If we perceive the “thing” more abstractly, then the psychological distance is large.

Psychological distance has several dimensions. First, and most obviously, there is the physical dimension of psychological distance. For example, Europeans in general have a quite abstract view of Australia, whereas the same people have a pretty detailed view of their own country or city.

Second, there is the time dimension. We perceive “things” happening now in detail, whereas we perceive things that happened in the distant past (20 years ago) or that will happen in the distant future (50 years from now) in an abstract way.

Third, there is the likelihood dimension. We perceive highly likely events in more detail, while highly unlikely events are perceived very abstractly. For example, people in The Netherlands perceive a rainy day in its smallest details (it rains a lot in this country), but a rainy day in the Sahara desert is perceived very abstractly.

Fourth, there is the hypotheticality dimension. We perceive real things in detail, whereas hypothetical things are perceived more abstractly. For example, everyone can perceive a cat in detail, but a fish that travels in space is perceived more abstractly.

Fifth, there is the social dimension. Each of us perceives things that happen to oneself in detail, whereas things that happen to others are perceived more abstractly. Another instance where the social dimension of psychological distance is present is the in-group vs. out-group situation, or in simpler language “us vs. them”. For example, an Italian will perceive in detail what is happening to another Italian, but will perceive more abstractly what is happening to a Norwegian. It has to be acknowledged that the in-group can be constructed at different levels. For example, Italians in comparison with Norwegians form an in-group, but Italians who support A.S. Roma and Italians who support Lazio are sworn enemies.

Psychological distance is very important for decision making and behavior. I will point out a few influences that it has on both decisions and behavior.
First, let’s take the time dimension of psychological distance. One implication of psychological distance is that some of our decisions made now have consequences in the future. For example, a pupil might decide to play now and not study, and the consequence of this decision will come in the distant future, when she applies for college.

Another implication of the time dimension of psychological distance regards plans and decisions about the future. Because we perceive future events more abstractly, we tend to ignore many factors. The most common flaw of planning is that we overestimate the resources we will have in the future. As Dan Ariely says, “we are all wonderful people in the future”. When we make a decision about the future we are inclined to be (very) unrealistic. For example, many people say things like “Next year I’ll lose 10 kilos” or “I’ll start studying next Monday”. However, when next year or next Monday comes, we tend to find some other, more important things to do.

This overestimation of future resources can be used to guide decisions and behavior. For example, if you are asked whether you would save more for retirement starting today by putting away 100 Euros per month, most likely you will say that saving for retirement is important, but that you will start saving 110 Euros per month starting next year, not now. Since it is easy to make good decisions that will take effect sometime in the future, a smart thing to do is to have a binding contract. For example, you can sign a contract now to start saving money next year… and in such a way that you can’t break the contract when “next year” comes. A very nice case study on this is “Save More Tomorrow”, which I will present later.

Turning to the hypotheticality dimension of psychological distance, it plays a big role in dishonest behavior. Dan Ariely reports in his book “The honest truth about dishonesty” that cheating to get “chips” that could later be exchanged for money was greater than cheating to get money directly. Now you know why in casinos the gambling is done with chips and not with real money.

From an objective point of view, a chip is the same thing as money. From a subjective point of view, a chip is not actually money; it is hypothetical money. It is one step away from money.
I guess that bank cards have a similar role. It is money but not “real” money. My assumption is that people spend more when using bank cards than when using cash because giving cash feels like spending money, whereas using a bank card does not.

Another implication of the hypotheticality dimension of psychological distance is that it makes difficult things easier to do. I’ve mentioned in “Despite Working in Very Personal Domains People Talk in a Very Impersonal Manner” that introducing psychological distance, hypotheticality in this case, makes it easier to do things that carry a huge psychological load. For example, doctors use very impersonal terms when talking about the people they are treating. This allows them not to carry the burden of “cutting up humans”.

Turning to the social dimension of psychological distance, one implication is that we tend to view our own problems or issues as larger than those of others. But this blog is about decision and behavior guiding, not about how to improve one-to-one relationships.

The in-group vs. out-group (or us vs. them) dimension plays a big role in behavior. We know that people follow or imitate other people’s behavior. At the same time, when there is a clear in-group vs. out-group situation, we are influenced by the behavior of other members of our in-group and we do not copy the behavior of the out-group. In fact it is not unexpected to do exactly the opposite of what the out-group does.

To sum up, psychological distance refers to how close or far we perceive a “thing”. In other words it refers to the level of detail we perceive. There are five dimensions of psychological distance: (1) physical; (2) temporal; (3) likelihood; (4) hypotheticality and (5) social which can be “me” or “someone else” or “Us vs. them”.

The existence (introduction) of psychological distance influences behavior and decisions in both positive and negative ways.


Effects of Framing Outcomes


Do you know the difference between tax evasion and financial (tax) consulting? The only difference is the point of view… or at least that’s what the joke says.

Many retailers that have both “brick and mortar” and on-line stores offer a discount for purchases made on-line… however, if we change the point of view, purchases in the “brick and mortar” shop carry a surcharge. The difference is, again, just the point of view.

Another example is the “classic” label that says “90% fat free”. If we change the point of view the same information can be communicated as “10% fat”.

The interesting thing is that exactly the same information framed in different ways is perceived very differently. Holding on to the example of “90% fat free” vs. “10% fat” you have to admit that the first message is much more appealing than the second.

In order to better understand these differences in perception, we have to remember one basic lesson of prospect theory, namely that any outcome is perceived as a gain or as a loss in relation to a reference point. At the same time, the reference point can be manipulated. But before explaining more about this, I would like to present a classic example of framing effects, namely “The Asian disease problem”.

“The Asian disease problem” goes like this: a dangerous disease is expected to hit a city and scientific estimates are that 600 people will die from this disease. The government can adopt one of two programs to fight this epidemic.

To half of the participants in the study the two programs were presented like this:

Program A: 200 people will be saved for sure.
Program B: there is 1/3 probability that all 600 people will be saved and 2/3 probability that no-one will be saved.

The results were that 72% of people chose Program A and 28% chose Program B.

To the other half of participants in the study the two programs were presented like this:

Program A: 400 people will die for sure.
Program B: there is 1/3 probability no one will die and 2/3 probability that all 600 people will die.

The results were that 22% of people chose Program A and 78% chose Program B.

If we look carefully, we see that in both scenarios programs A and B have the same objective expected value. From the perspective of normative economics there should be indifference between choices or in other words a 50-50 preference. But this is not what is most interesting.

If we look at how the problem was presented to the two groups, we see that the two “problems” are identical. In the first problem, if program A is adopted, 200 people will be saved for sure, which by logic implies that the remaining 400 will die. In the second problem, if program A is adopted, 400 people will die for sure, which by logic means that 200 people will be saved.

Similarly for program B: if we compare the two problems, we see that program B is in essence the same. In the first problem, a 1/3 probability that all 600 are saved is the same thing as a 1/3 probability that no one will die, and a 2/3 probability that no one will be saved is exactly the same thing as a 2/3 probability that all 600 people will die.
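The equivalence is easy to verify with a quick expected-value calculation. Here is a minimal sketch in Python, using only the numbers given in the problem:

```python
# Expected number of people saved (out of 600) under each description.
saved_A_gain = 200                            # "200 people will be saved for sure"
saved_B_gain = (1/3) * 600 + (2/3) * 0        # risky program, gain framing -> 200

saved_A_loss = 600 - 400                      # "400 people will die for sure" -> 200 saved
saved_B_loss = (1/3) * (600 - 0) + (2/3) * (600 - 600)   # risky program, loss framing -> 200

print(saved_A_gain, saved_B_gain, saved_A_loss, saved_B_loss)   # 200 200.0 200 200.0
```

All four options boil down to an expected 200 people saved (and 400 lost); only the wording of gains and losses differs.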

What explains the differences in the choices of the participants in the study? The answer is the way in which the information is framed. In more detail, by manipulating the reference point the outcomes are presented as gains or losses. We know that loss aversion is very powerful and, in the domain of losses, makes people risk seeking rather than risk averse.

In the first problem, by describing program A as “saving” 200 people it is implied that the reference point is all 600 not being saved (dying). Thus, saving 200 people is perceived as a gain. We already know that in the area of gains people are risk averse (we prefer a sure (smaller) gain to a (larger) risky gain). This means that (most) people will go for the sure thing, hence the 72% preference for program A.

In the second problem, by describing program A as “400 people will die for sure” it is implied that the reference point is all 600 not dying (being saved). Thus, “400 people dying” is perceived as a loss. As we know, when it comes to losses people are risk seeking (see Loss Aversion and its Implications). This explains the 78% preference for program B.

Coming back to “90% fat free” being more appealing than “10% fat” the key is the manipulation of the reference point. In the first framing of “90% fat free” the implied (manipulated) reference point is 100% fat. In the second framing of “10% fat” the implied reference point is 0% fat.

Similarly, for the X% discount for buying on-line, the implied reference point is the price in the “brick and mortar” store. But if we take the price in the on-line store as the reference point, then the price in the “brick and mortar” store is Y% higher.
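Note that X and Y are not even the same number, because each is computed against a different reference price. A quick sketch with hypothetical prices (my own illustrative figures, not from any particular retailer) makes this concrete:

```python
online, store = 90.0, 100.0   # hypothetical prices, for illustration only

discount_vs_store = (store - online) / store      # 0.10  -> "10% discount on-line"
surcharge_vs_online = (store - online) / online   # 0.111 -> "11.1% surcharge in store"

print(f"{discount_vs_store:.1%} discount vs. {surcharge_vs_online:.1%} surcharge")
```

Same pair of prices, two different percentages and, more importantly, two different framings of the same outcome.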

The framing of outcomes (as it is called in scientific terms) has numerous implications for decision design and behavior in general. By manipulating the reference point the same outcome can be presented (framed) as a gain or a loss. We know that people would put in more effort to avoid a loss than they would to obtain a gain. This has numerous implications.

One example is traffic violation penalties. Many countries have a penalty point system, which in essence means that when a driver accumulates a certain number of violations, weighted by their severity, he or she loses the license. In my home country, Romania, penalty points are “given”. This means that a clean driver starts with zero points and, as he or she commits traffic violations (and is caught), points are added. In our southern neighbor, Bulgaria, things are a bit different. If you don’t know the difference between Romania and Bulgaria it is OK; just know that they are two distinct countries that share many similarities. In Bulgaria, a clean driver starts with a number of points (let’s say 20) and, as he or she commits traffic violations, points are subtracted.

Apparently there is no “real” difference since the basic principle is the same, namely that traffic violations are quantified in penalty points. However, this is not entirely true since subtracting points is perceived as a loss, while adding points is perceived as a gain. Since we know that people would put in more effort to avoid a loss than they would to obtain a gain, I believe that the model applied in Bulgaria is better.

Another implication of framing effects based on loss aversion is that people are more willing to forgo a gain (discount) than they are to accept an equivalent loss. Coming back to the “brick and mortar” and on-line shops, framing the smaller on-line price as a “discount” makes people more willing to buy from a “brick and mortar” shop, because this way they don’t suffer a loss; they merely forgo a discount. If the “reference” price were the one in the on-line shop and “brick and mortar” shops applied a surcharge, then people would be less willing to shop in the latter, because the surcharge would be perceived as a loss.

To sum up, the same information (about an outcome) can be framed as a gain or as a loss, and this has significant implications for decisions and behavior. The “mechanism” behind the framing is the manipulation of the reference point against which the outcome is compared. The rationale behind the differences in choices and behaviors due to framing effects is loss aversion.





Note: This post is documented from Tversky, A. and D. Kahneman (1981). “The framing of decisions and the psychology of choice.” Science. 211, 453-458.

5 December 2012

Loss Aversion and its Implications


People don’t like to lose things or money. Earlier this year I wrote a post called “It’s hard to say good bye”, which described loss aversion in practice. In this post I’ll focus more on the theoretical side of loss aversion.

Loss aversion is the trait that people don’t like losses. Not only do we not like to lose, we actually hate losing. To better understand this, let’s go into the world of decision-making research and focus on a couple of gambles.

The first gamble is a 50% chance of gaining 100 Euros and a 50% chance of losing 100 Euros.

What is your attitude about this gamble? Would you be willing to take it? It’s a 50-50 chance of either winning or losing 100 Euros. Will you take it?

Most likely your answer is a definite “NO”, and most people in the world would give the same answer. From a strictly rational (normative economics) perspective, this answer is at least weird. The reason is that the objective expected value of this gamble is 100*0.5 + (-100)*0.5 = 0. If the objective expected value is zero, then you should be indifferent to the gamble (or at least some people should take it). But you, and most (normal) people in the world, would not even consider taking this gamble.

The idea of being indifferent to a gamble basically means that you are equally inclined to take and to not taking it.

The second gamble is a 50% chance of gaining 110 Euros and a 50% chance of losing 100 Euros.

What is your attitude about this gamble? Would you be willing to take it? It’s a 50-50 chance of either winning 110 Euros or losing 100 Euros. Will you take it?

Most likely your answer is still “No”. At the same time, you have already learned how to compute the objective value of the gamble; applying the very simple formula gives 110*0.5 + (-100)*0.5 = 5. As you can see, this gamble has a positive objective expected value. From a normative economics (rational) perspective everyone should be willing to take this gamble, but again most people are not willing to take it.

The third gamble I am proposing is a 50% chance of gaining 150 Euros and a 50% chance of losing 100 Euros.

What is your attitude about this gamble? Would you be willing to take it? It’s a 50-50 chance of either winning 150 Euros or losing 100 Euros. Will you take it?

To be honest I don’t know your answer here, but I would guess it is more likely “No” than it is “Yes”. Again you can compute the objective expected value of the gamble by applying the very simple formula of 150*0.5 + (-100)*0.5=25. From a rational perspective everyone should be willing to take this gamble… after all its objective expected value is 25 Euros which is not exactly spare change.

Compared with the second gamble, in the third one I believe that more people would be willing to take the gamble. At the same time, there would still be people who would refuse to take this gamble, although it has an objective value that is positive and significant.

The fourth gamble I am proposing is a 50% chance of gaining 200 Euros and a 50% chance of losing 100 Euros.

What is your attitude about this gamble? Would you be willing to take it? It’s a 50-50 chance of either winning 200 Euros or losing 100 Euros. Will you take it?

Again I’m not sure that your answer is “Yes”, but I would assume it to be most likely “yes”. The objective expected value of this gamble is 50 Euros and I guess most people would take the risk of losing 100 Euros for the equal opportunity to gain 200 Euros.

A skeptic would say that in order for people to accept a gamble they need a substantial expected value, such as 50 Euros. But let me address this objection through the following (fifth) gamble:
A 50% chance of gaining 10 Euros and a 50% chance of gaining nothing.

Would you take this gamble? The answer is most likely “Yes”. At the same time you were not willing to take gamble number two (50% chance of gaining 110 Euros and a 50% chance of losing 100 Euros)  which has the exact same expected value of 5 Euros.

This (apparent) paradox can be explained by looking at the differences between gambles number two and five. Although they have the same objective expected value (5 Euros), they are very different, in the sense that gamble number two involves the possibility of a loss, while gamble number five has no potential loss. In the worst-case scenario in gamble number five you will not win anything. In the worst-case scenario in gamble number two you would lose 100 Euros.

This is an illustration of loss aversion. In other words losses hurt more than winnings bring pleasure. To better understand this, imagine that there is a unit of measure for pleasure and pain called “hedon” (from hedonic). A gain of 100 Euros gives a pleasure of X “hedons”. At the same time, a loss of 100 Euros gives a pain (negative pleasure) of –A*X “hedons”. Here “A” is the loss aversion coefficient.

Going back a bit to the first three gambles, we see that the main differences among them consist of the ratio between the amount of gain and amount of loss. In the first gamble the ratio is 1, in other words the amounts of gain and loss are equal (100 Euros). In the second and third gambles this ratio is not 1, namely the amount of gain is bigger than the amount of loss. Still (most) people would not be willing to take these gambles.

Only in the fourth gamble, where the ratio is 2 and the amount of gain is double that of the loss (200 vs. 100 Euros), would most people be willing to take the gamble. This is an illustration of the existence of the “loss aversion coefficient”, called “A” in the previous paragraph. A series of studies have shown that this coefficient is roughly 2, in the sense that a loss looms about twice as large as an equal gain.

I have to make a note here. The loss aversion coefficient is ROUGHLY 2; it is an average. For some people it is smaller, for others larger. In some situations it is smaller, in other situations it is larger. At the same time, the variation is not huge, so saying that the loss aversion coefficient is about 2 is overall a safe estimate.
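To put the “hedon” idea into numbers, here is a minimal sketch in Python (my own illustration, assuming a linear value function and the rough average coefficient A = 2 mentioned above) that scores the five gambles from this post:

```python
A = 2.0  # loss-aversion coefficient, roughly 2 on average

def subjective_value(amount):
    # Gains count at face value; losses are weighted A times more heavily.
    return amount if amount >= 0 else A * amount

def gamble_value(outcomes):
    # outcomes: list of (probability, amount) pairs
    return sum(p * subjective_value(x) for p, x in outcomes)

gambles = {
    "gamble 1: +100 / -100": [(0.5, 100), (0.5, -100)],
    "gamble 2: +110 / -100": [(0.5, 110), (0.5, -100)],
    "gamble 3: +150 / -100": [(0.5, 150), (0.5, -100)],
    "gamble 4: +200 / -100": [(0.5, 200), (0.5, -100)],
    "gamble 5:  +10 /    0": [(0.5, 10),  (0.5, 0)],
}

for name, outcomes in gambles.items():
    ev = sum(p * x for p, x in outcomes)   # objective expected value
    sv = gamble_value(outcomes)            # loss-averse subjective value
    print(f"{name}:  EV = {ev:6.1f}   subjective value = {sv:6.1f}")
```

With A = 2, the first three gambles come out with a negative subjective value, the fourth lands exactly at the break-even point, and the fifth (the no-loss gamble with the same objective expected value as gamble two) is clearly positive: the same pattern of acceptances and refusals described above.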

Loss aversion has two major implications. First, people would put in more effort to avoid a loss than they would in order to obtain an equal gain. For example people would work twice as hard to avoid losing 10 Euros already paid and non-refundable than they would work to gain 10 Euros.
The second implication is that when it comes to losses people become risk seeking.

When it comes to gains, people are risk averse. I’ve detailed this in “Certainty and Possibility Effects”.

To refresh your memory, if you would have to choose between the following two options:

Option A: 100 Euros for sure
Option B: 50% chance of winning 220 Euros and 50% chance of winning nothing.

Most likely your choice is option A. The same goes for most people. At the same time, if we look at the second option and compute the objective expected value, it is 110 Euros. In essence, you (and most people) prefer a smaller but sure amount to a higher objective expected value that involves risk. This is called “Risk Aversion”.

When it comes to losses, however, things are quite different. Again, you have to make a choice between two options:

Option C: losing 100 Euros for sure
Option D:  50% chance of losing 220 Euros and 50% chance of losing nothing.

Most likely your choice is option D. The same goes for most people. If we look at the objective values of the two options, we see that option D has a worse (more negative) expected value: -110 Euros, compared with -100 Euros for option C.

If we look at options A&B and C&D, we see that they are very similar. The only difference is that options A&B involve gains while options C&D involve losses. This (small) difference is the cause of the big difference in preferences about risk. This is the second implication of loss aversion: it makes people more willing to take risks when facing losses.

Up to this point I have presented loss aversion, and now it is time to acknowledge that it has some limitations. I believe that loss aversion (really) exists and plays a huge role in human life. At the same time, loss aversion is not always present. For example, there are studies showing that asking people to “think like a trader” caused loss aversion to decline or even disappear.

Another limitation of loss aversion is in the area of costs. For example, a shopkeeper will not feel a loss when selling one of the products in the shop. After all, that is why he has a shop. Similarly, the client will not feel the money spent on the product (the paid price) as a loss. The key idea is that costs are not losses.

There is one note to be made here. Costs are not losses as long as people expect them. Remember that prospect theory states that any outcome is perceived as a loss or a gain in relation to a reference point. For example, if you go to a restaurant for dinner, you expect to pay for the food, for the drinks and to leave a tip (pay for service). None of these costs are perceived as losses, since you expect to pay them. But if you see on your bill that the restaurant charged you for using their cutlery and for sitting on a chair, then you will feel those costs as losses because they are unexpected.

To sum up, losses are perceived as roughly twice as large as gains of the same amount. People work harder to avoid losses, and when it comes to losing, people are more inclined to risk a higher loss in order to avoid a sure, smaller loss. Loss aversion is not always present; thinking like a trader makes loss aversion smaller. Expected costs are not perceived as losses, but unexpected ones are.
