
List of cognitive biases

Started by Cain, November 01, 2010, 02:19:15 PM


Cain

The Halo Effect

In A Sentence: People who are good-looking are assumed to be inherently better and are treated more favourably than unattractive people.

The Science:

Quote from: Robert Cialdini, Influence: Science and Practice
Research has shown that we automatically assign to good-looking individuals such favorable traits as talent, kindness, honesty, and intelligence (for a review of this evidence, see Eagly, Ashmore, Makhijani, & Longo, 1991).  Furthermore, we make these judgments without being aware that physical attractiveness plays a role in the process.  Some consequences of this unconscious assumption that "good-looking equals good" scare me.  For example, a study of the 1974 Canadian federal elections found that attractive candidates received more than two and a half times as many votes as unattractive candidates (Efran & Patterson, 1976).  Despite such evidence of favoritism toward handsome politicians, follow-up research demonstrated that voters did not realize their bias.  In fact, 73 percent of Canadian voters surveyed denied in the strongest possible terms that their votes had been influenced by physical appearance; only 14 percent even allowed for the possibility of such influence (Efran & Patterson, 1976).  Voters can deny the impact of attractiveness on electability all they want, but evidence has continued to confirm its troubling presence (Budesheim & DePaola, 1994).

A similar effect has been found in hiring situations.  In one study, good grooming of applicants in a simulated employment interview accounted for more favorable hiring decisions than did job qualifications - this, even though the interviewers claimed that appearance played a small role in their choices (Mack & Rainey, 1990).  The advantage given to attractive workers extends past hiring day to payday.  Economists examining U.S. and Canadian samples have found that attractive individuals get paid an average of 12-14 percent more than their unattractive coworkers (Hamermesh & Biddle, 1994).

Equally unsettling research indicates that our judicial process is similarly susceptible to the influences of body dimensions and bone structure.  It now appears that good-looking people are likely to receive highly favorable treatment in the legal system (see Castellow, Wuensch, & Moore, 1991; and Downs & Lyons, 1990, for reviews).  For example, in a Pennsylvania study (Stewart, 1980), researchers rated the physical attractiveness of 74 separate male defendants at the start of their criminal trials.  When, much later, the researchers checked court records for the results of these cases, they found that the handsome men had received significantly lighter sentences.  In fact, attractive defendants were twice as likely to avoid jail as unattractive defendants.  In another study - this one on the damages awarded in a staged negligence trial - a defendant who was better looking than his victim was assessed an average amount of $5,623; but when the victim was the more attractive of the two, the average compensation was $10,051.  What's more, both male and female jurors exhibited the attractiveness-based favoritism (Kulka & Kessler, 1978).

Other experiments have demonstrated that attractive people are more likely to obtain help when in need (Benson, Karabenick, & Lerner, 1976) and are more persuasive in changing the opinions of an audience (Chaiken, 1979)...

Cain

The Conjunction Fallacy

In A Sentence:  People believe specific conditions are more likely than general ones.

The Science:

Quote from: Wikipedia
The conjunction fallacy is a logical fallacy that occurs when it is assumed that specific conditions are more probable than a single general one.

The most oft-cited example of this fallacy originated with Amos Tversky and Daniel Kahneman:

Quote
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

1. Linda is a bank teller.
2. Linda is a bank teller and is active in the feminist movement.

85% of those asked chose option 2. However, the probability of two events occurring together (in "conjunction") is always less than or equal to the probability of either one occurring alone [...]

For example, even choosing a very low probability of Linda being a bank teller, say Pr(Linda is a bank teller) = 0.05 and a high probability that she would be a feminist, say Pr(Linda is a feminist) = 0.95, then, assuming independence, Pr(Linda is a bank teller and Linda is a feminist) = 0.05 × 0.95 or 0.0475, lower than Pr(Linda is a bank teller).

Tversky and Kahneman argue that most people get this problem wrong because they use the representativeness heuristic to make this kind of judgment: Option 2 seems more "representative" of Linda based on the description of her, even though it is clearly mathematically less likely.

(As a side issue, some people may simply be confused by the difference between "and" and "or". Such confusions are often seen in those who have not studied logic, and the probability of such sentences using "or" instead of "and" is completely different. They may read sentence #1 as implying that Linda is necessarily not active in the feminist movement.)

Many other demonstrations of this error have been studied. In another experiment, for instance, policy experts were asked to rate the probability that the Soviet Union would invade Poland, and the United States would break off diplomatic relations, all in the following year. They rated it on average as having a 4% probability of occurring. Another group of experts was asked to rate the probability simply that the United States would break off relations with the Soviet Union in the following year. They gave it an average probability of only 1%. Researchers argued that a detailed, specific scenario seemed more likely because of the representativeness heuristic, but each added detail would actually make the scenario less and less likely. In this way it could be similar to the misleading vividness or slippery slope fallacies, though it is possible that people underestimate the general possibility of an event occurring when not given a plausible scenario to ponder.
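As a side note, the arithmetic in the Linda example is easy enough to check directly. Here is a minimal sketch in Python, purely illustrative; the 0.05 and 0.95 figures and the independence assumption are taken straight from the quote above.

# Probabilities taken from the quoted Linda example.
p_teller = 0.05     # Pr(Linda is a bank teller)
p_feminist = 0.95   # Pr(Linda is active in the feminist movement)

# Assuming independence, as the quote does, the conjunction is the product.
p_both = p_teller * p_feminist
print(p_both)       # 0.0475 -- already below Pr(bank teller)

# More generally, for any events A and B, Pr(A and B) <= min(Pr(A), Pr(B)),
# so option 2 can never be more probable than option 1, whatever Linda is like.
assert p_both <= min(p_teller, p_feminist)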

Triple Zero

Ah, you're reading that book too? Think I came across the title via some Reddit thread months ago, downloaded it from gigapedia and read a good chunk of it during my holiday in Italy.

Fucking brilliant and scary interesting it is, even though a good many of the examples are very well known to most people vaguely interested in this sort of matter (Milgram, etc), the eye-opener Cialdini manages to pull off for me is by providing the numbers, and especially the numbers of people beforehand that would say not to be affected, to dispel the myth of "Ah, but I surely wouldn't fall for that so easily ...". I still can't quite entirely imagine how I would fall for certain tricks or schemes, but the numbers don't lie and me not being able to imagine it doesn't change that much.


edit : oh it seems you have more plans for this thread than quoting Cialdini, then I'm being a little bit OT, please do carry on!
Ex-Soviet Bloc Sexual Attack Swede of Tomorrow™
e-prime disclaimer: let it seem fairly unclear I understand the apparent subjectivity of the above statements. maybe.

INFORMATION SO POWERFUL, YOU ACTUALLY NEED LESS.

Cain

The Dunning–Kruger effect

In A Sentence:  Underskilled people overrate their abilities, whereas highly skilled people underrate their abilities.

The Science:

Quote from: Wikipedia
The Dunning–Kruger effect is a cognitive bias in which an unskilled person makes poor decisions and reaches erroneous conclusions, but their incompetence denies them the metacognitive ability to realize their mistakes. The unskilled therefore suffer from illusory superiority, rating their own ability as above average, much higher than it actually is, while the highly skilled underrate their abilities, suffering from illusory inferiority. This leads to the situation in which less competent people rate their own ability higher than more competent people. It also explains why actual competence may weaken self-confidence: because competent individuals falsely assume that others have an equivalent understanding. "Thus, the miscalibration of the incompetent stems from an error about the self, whereas the miscalibration of the highly competent stems from an error about others."

The Dunning–Kruger effect was put forward by Justin Kruger and David Dunning. Similar notions have been expressed – albeit less scientifically – for some time. Dunning and Kruger themselves quote Charles Darwin ("Ignorance more frequently begets confidence than does knowledge") and Bertrand Russell ("One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision."). W.B. Yeats put it concisely thus: "The best lack all conviction, while the worst / Are full of passionate intensity." The Dunning–Kruger effect is not, however, concerned narrowly with high-order cognitive skills (much less their application in the political realm during a particular era, which is what Russell was talking about.) Nor is it specifically limited to the observation that ignorance of a topic is conducive to overconfident assertions about it, which is what Darwin was saying. Indeed, Dunning et al. cite a study saying that 94% of college professors rank their work as "above average" (relative to their peers), to underscore that the highly intelligent and informed are hardly exempt. Rather, the effect is about paradoxical defects in perception of skill, in oneself and others, regardless of the particular skill and its intellectual demands, whether it is chess, playing golf or driving a car.

The hypothesized phenomenon was tested in a series of experiments performed by Justin Kruger and David Dunning, then both of Cornell University. Kruger and Dunning noted earlier studies suggesting that ignorance of standards of performance is behind a great deal of incompetence. This pattern was seen in studies of skills as diverse as reading comprehension, operating a motor vehicle, and playing chess or tennis.

Kruger and Dunning proposed that, for a given skill, incompetent people will:

   1. tend to overestimate their own level of skill;
   2. fail to recognize genuine skill in others;
   3. fail to recognize the extremity of their inadequacy;
   4. recognize and acknowledge their own previous lack of skill, if they can be trained to substantially improve.

Dunning has since drawn an analogy ("the anosognosia of everyday life") to a condition in which a person who suffers a physical disability because of brain injury seems unaware of or denies the existence of the disability, even for dramatic impairments such as blindness or paralysis.

Kruger and Dunning set out to test these hypotheses on human subjects consisting of Cornell undergraduates who were registered in various psychology courses. In a series of studies, they examined self-assessment of logical reasoning skills, grammatical skills, and humor. After being shown their test scores, the subjects were again asked to estimate their own rank, whereupon the competent group accurately estimated their rank, while the incompetent group still overestimated their own rank. As Dunning and Kruger noted,

Quote
Across four studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although test scores put them in the 12th percentile, they estimated themselves to be in the 62nd.

Meanwhile, people with true knowledge tended to underestimate their relative competence. Roughly, participants who found tasks to be relatively easy erroneously assumed, to some extent, that the tasks must also be easy for others.

A follow-up study suggests that grossly incompetent students improve both their skill level and their ability to estimate their class rank only after extensive tutoring in the skills they had previously lacked.

In 2003 Dunning and Joyce Ehrlinger, also of Cornell University, published a study that detailed a shift in people's views of themselves influenced by external cues. Participants in the study (Cornell University undergraduates) were given tests of their knowledge of geography, some intended to positively affect their self-views, some intended to affect them negatively. They were then asked to rate their performance, and those given the positive tests reported significantly better performance than those given the negative.

Daniel Ames and Lara Kammrath extended this work to sensitivity to others, and the subjects' perception of how sensitive they were. Other research has suggested that the effect is not so obvious and may be due to noise and bias levels.

Dunning, Kruger, and coauthors' latest paper on this subject comes to qualitatively similar conclusions to their original work, after making some attempt to test alternative explanations. They conclude that the root cause is that, in contrast to high performers, "poor performers do not learn from feedback suggesting a need to improve."

Studies on the Dunning–Kruger effect tend to focus on American test subjects. Similar studies on European subjects show marked muting of the effect; studies on some East Asian subjects suggest that something like the opposite of the Dunning–Kruger effect operates on self-assessment and motivation to improve:

Quote
Regardless of how pervasive the phenomenon is, it is clear from Dunning's and others' work that many Americans, at least sometimes and under some conditions, have a tendency to inflate their worth. It is interesting, therefore, to see the phenomenon's mirror opposite in another culture. In research comparing North American and East Asian self-assessments, Heine of the University of British Columbia finds that East Asians tend to underestimate their abilities, with an aim toward improving the self and getting along with others.

Cain

Quote from: Triple Zero on November 01, 2010, 02:28:48 PM
Ah, you're reading that book too? Think I came across the title via some Reddit thread months ago, downloaded it from gigapedia and read a good chunk of it during my holiday in Italy.

Fucking brilliant and scary interesting it is, even though a good many of the examples are very well known to most people vaguely interested in this sort of matter (Milgram, etc), the eye-opener Cialdini manages to pull off for me is by providing the numbers, and especially the numbers of people beforehand that would say not to be affected, to dispel the myth of "Ah, but I surely wouldn't fall for that so easily ...". I still can't quite entirely imagine how I would fall for certain tricks or schemes, but the numbers don't lie and me not being able to imagine it doesn't change that much.


edit : oh it seems you have more plans for this thread than quoting Cialdini, then I'm being a little bit OT, please do carry on!

It's OK.  I've only really started on Cialdini myself, but I've heard from several people he is one of the few to have studied and written about influence in a truly scientific manner.  If you want to discuss him in another thread, when I am finished, I'd be happy to.

Cain

The Loss Aversion Bias

also known as

Escalation of Commitment Bias/The Sunk Cost Fallacy/Irrational Escalation Bias

In A Sentence: We are hard-wired to feel losses more strongly than equivalent gains, which explains a number of irrational behaviours people display.

The Science:

Quote from: Wikipedia
In economics and decision theory, loss aversion refers to people's tendency to strongly prefer avoiding losses to acquiring gains. Some studies suggest that losses are twice as powerful, psychologically, as gains. Loss aversion was first convincingly demonstrated by Amos Tversky and Daniel Kahneman.

This leads to risk aversion when people evaluate a possible gain, since people prefer avoiding losses to making gains; this explains the curvilinear shape of the prospect theory utility graph in the positive domain. Conversely, people strongly prefer risks that might mitigate a loss (called risk-seeking behavior).

Loss aversion may also explain sunk cost effects.

Loss aversion implies that one who loses $100 will lose more satisfaction than another person will gain satisfaction from a $100 windfall. In marketing, the use of trial periods and rebates tries to take advantage of the buyer's tendency to value the good more after he incorporates it into the status quo.

[...]

In economics and business decision-making, sunk costs are retrospective (past) costs that have already been incurred and cannot be recovered. Sunk costs are sometimes contrasted with prospective costs, which are future costs that may be incurred or changed if an action is taken. Both retrospective and prospective costs may be either fixed (that is, they are not dependent on the volume of economic activity, however measured) or variable (dependent on volume).

In traditional microeconomic theory, only prospective (future) costs are relevant to an investment decision. Traditional economics proposes that an economic actor should not let sunk costs influence their decisions, because doing so would mean not assessing the decision exclusively on its own merits. The decision-maker may make rational decisions according to their own incentives; these incentives may dictate different decisions than would be dictated by efficiency or profitability, and this is considered an incentive problem, distinct from a sunk cost problem.

Evidence from behavioral economics suggests this theory fails to predict real-world behavior. Sunk costs greatly affect actors' decisions, because humans are inherently loss-averse and thus normally act irrationally when making economic decisions.

Sunk costs should not affect the rational decision-maker's best choice. However, until a decision-maker irreversibly commits resources, the prospective cost is an avoidable future cost and is properly included in any decision-making processes. For example, if you are considering pre-ordering movie tickets, but have not actually purchased them yet, the cost remains avoidable. If the price of the tickets rises to an amount that requires you to pay more than the value you place on them, the change in prospective cost should be figured into the decision-making, and the decision should be reevaluated.

[...]

Escalation of commitment was first described by Barry M. Staw in his 1976 paper, "Knee deep in the big muddy: A study of escalating commitment to a chosen course of action". More recently the term sunk cost fallacy has been used to describe the phenomenon where people justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting that the cost, starting today, of continuing the decision outweighs the expected benefit. Such investment may include money, time, or — in the case of military strategy — human lives. The phenomenon and the sentiment underlying it are reflected in such proverbial images as Throwing good money after bad and In for a dime, in for a dollar (or In for a penny, in for a pound).

The term is also used to describe poor decision-making in business, government, information systems in general, software project management in particular, politics, and gambling. The term has been used to describe the United States commitment to military conflicts including Vietnam in the 1960s - 1970s and in Iraq in the 2000s, where dollars spent and lives lost justify continued involvement.

Alternatively, irrational escalation (sometimes referred to as irrational escalation of commitment or commitment bias) is a term frequently used in psychology, philosophy, economics, and game theory to refer to a situation in which people can make irrational decisions based upon rational decisions in the past or to justify actions already taken. Examples are frequently seen when parties engage in a bidding war; the bidders can end up paying much more than the object is worth to justify the initial expenses associated with bidding (such as research), as well as part of a competitive instinct.
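To put a rough shape on the "losses are twice as powerful" claim and the curvilinear utility graph mentioned above, here is a small Python sketch of the standard prospect theory value function. The curvature and loss-aversion numbers are the commonly cited Tversky & Kahneman estimates rather than figures from the quote, so treat them as illustrative assumptions.

# Sketch of the prospect theory value function (illustrative parameters).
def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain (x > 0) or a loss (x < 0)."""
    if x >= 0:
        return x ** alpha            # concave for gains: risk aversion
    return -lam * (-x) ** alpha      # steeper and convex for losses: risk seeking

print(value(100))    # ~57.5: the felt value of gaining $100
print(value(-100))   # ~-129.5: losing $100 stings roughly twice as much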

Gambling is an especially good example of how this works.  We want to make our losses "worth something", and so we stick with whatever activity we are engaged in, in the hope of a larger future payoff.  We don't want to make our sacrifices "meaningless".
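And to make the gambling point concrete, here is a toy calculation with made-up numbers. The only thing it shows is that the money already lost never appears in the expected value of the next bet, only in our feelings about it.

# Toy sunk-cost example; all figures are invented for illustration.
sunk = 500                    # already lost tonight; unrecoverable either way
stake, p_win, payout = 50, 0.45, 100

# The rational decision looks only at the prospective bet.
ev_next_bet = p_win * payout - stake
print(ev_next_bet)            # -5.0: the next bet loses money on average

# The fallacy is letting the sunk 500 drive the choice ("win it back"),
# even though it is the same whether we keep playing or walk away.
keep_playing_fallacious = sunk > 0        # reasoning from what is already gone
keep_playing_rational = ev_next_bet > 0   # reasoning from prospective value only
print(keep_playing_fallacious, keep_playing_rational)   # True False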