Problem Solving, Decision-Making, and Biases (6B)
A startup’s product team must decide whether to continue investing in a feature that has not improved user retention. They have already spent $250,000 and 6 months of development time. A new analysis projects that, even if completed, the feature is unlikely to increase retention beyond 1%, while an alternative project could plausibly yield a 5% increase with similar future costs. During discussion, several team members argue, “We’ve put too much into this to stop now,” and recommend continuing primarily because of the prior investment. Which decision is most likely given the influence of the sunk cost fallacy?
They will stop the current feature and reallocate resources to the alternative project because only future costs and benefits should guide the choice.
They will continue the current feature largely to justify past expenditures, even though projections suggest a better alternative going forward.
They will randomize between the two options to avoid regret, since both require similar future costs.
They will choose the alternative project because the larger potential gain makes them risk-seeking in the domain of gains.
Explanation
This question examines the sunk cost fallacy in resource allocation. The sunk cost fallacy leads individuals to continue investing in a failing course of action because of irrecoverable past costs rather than evaluating future costs and benefits alone. The team has already spent $250,000 and six months on a low-yield feature, and members argue for persisting in order to justify these expenditures despite a better alternative. Choice B is correct because it reflects commitment to the current path to avoid "wasting" prior investments. Choice A is incorrect because it describes rational prospective decision-making, not the irrational influence of sunk costs. To avoid sunk cost errors, focus solely on future costs and benefits, and regularly ask whether you would start the project now given current projections.
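The prospective rule described above can be sketched in a few lines of Python. The retention projections (1% vs. 5%) come from the vignette; the equal-future-cost assumption is stated there ("similar future costs"), and the net-benefit comparison itself is an illustrative simplification:

```python
# Minimal sketch of prospectively rational choice: sunk costs are excluded.
# Retention gains are from the vignette; equal future costs are normalized
# to the same value so they cancel in the comparison.

def best_option(options, sunk_cost=0.0):
    """Pick the option with the highest future net benefit.
    `sunk_cost` is accepted only to show it never enters the ranking."""
    return max(options, key=lambda o: o["retention_gain"] - o["future_cost"])

options = [
    {"name": "current feature", "retention_gain": 0.01, "future_cost": 1.0},
    {"name": "alternative project", "retention_gain": 0.05, "future_cost": 1.0},
]

# The $250,000 already spent has no effect on the result.
print(best_option(options, sunk_cost=250_000)["name"])  # alternative project
```

Because the past expenditure never appears in the comparison, the choice depends only on projected future outcomes, which is exactly the discipline the sunk cost fallacy undermines.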
A clinic is testing whether clinicians show availability bias when estimating disease likelihood. Over one week, clinicians read two brief case summaries before estimating the probability of Disease X. Case A describes a dramatic, memorable presentation of Disease X that is rare; Case B describes a common presentation of a different disease with similar symptoms. Both cases include base-rate information: Disease X prevalence is 1% in the clinic population. After reading Case A, clinicians estimate Disease X probability at 25% for a new patient with similar symptoms. Based on the vignette, how might availability bias influence the decision?
Clinicians adhere closely to the 1% prevalence because base-rate information is always prioritized over case descriptions.
Clinicians reduce the estimated likelihood of Disease X because rare events are systematically underestimated after exposure.
Clinicians estimate 25% because they assume the researcher wants large numbers, reflecting demand characteristics rather than availability.
Clinicians overweight the vividness of the rare case and inflate the estimated likelihood of Disease X despite the stated base rate.
Explanation
This question tests understanding of availability bias, where people overestimate the likelihood of events that come easily to mind, often because of vividness or recent exposure. Availability bias causes systematic errors in probability estimation because memorable cases are more mentally accessible than statistical base rates. In this scenario, clinicians read a dramatic, memorable case of Disease X and subsequently estimate its probability at 25%, far exceeding the stated 1% base rate. The correct answer (D) demonstrates how the vivid rare case leads clinicians to overweight its likelihood despite knowing the actual prevalence. Answer A incorrectly suggests base rates always dominate, whereas availability bias specifically describes when vivid examples override statistical information. A key indicator of availability bias is probability estimates that dramatically exceed base rates after exposure to memorable examples.
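A quick Bayes' rule calculation shows how far the 25% estimate departs from a base-rate-consistent posterior. The 1% prevalence is from the vignette; the sensitivity and false-positive rate below are assumed purely for illustration, and even these generous values keep the posterior far below 25%:

```python
# Hedged illustration: posterior probability of Disease X given symptoms.
# Prevalence (1%) is from the vignette; sensitivity and the false-positive
# rate are assumed example values, not figures from the study.

def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' rule: P(disease | symptoms)."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

p = posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.20)
print(round(p, 3))  # 0.043 -- well below the clinicians' 25% estimate
```

The gap between a base-rate-anchored posterior (roughly 4% under these assumptions) and the reported 25% is the signature of availability-driven overestimation.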
A team is troubleshooting a software bug that causes intermittent data loss. Early in the investigation, a senior engineer suggests the database is “probably corrupt.” Over the next week, the team tests many hypotheses. When logs show that data loss occurs only after a specific user action, the engineer argues that the action “must be triggering corruption,” and the team continues to focus on database fixes rather than examining the user-action module. Later, a junior engineer finds a reproducible error in the user-action code that explains the data loss without any database corruption. Based on the vignette, which decision outcome is most consistent with belief perseverance?
The team abandons the initial database explanation after the first conflicting log and reallocates effort to the user-action module.
The team continues to treat the initial “database corruption” hypothesis as correct and interprets new information to fit it, despite disconfirming evidence.
The team selects the solution that requires the least immediate effort, regardless of long-term effectiveness, to reduce cognitive load.
The team estimates database corruption is common because they recently heard about a high-profile database failure at another company.
Explanation
This question assesses belief perseverance in problem-solving. Belief perseverance involves maintaining an initial belief despite contradictory evidence, often by reinterpreting new information to fit the original view. The team clings to the database-corruption hypothesis, reinterpreting logs that point to user actions and focusing its efforts accordingly until the hypothesis is disproven. Choice B is correct because it describes persistence with the initial idea amid disconfirming data. Choice D is incorrect because it reflects the availability heuristic driven by recent events, not adherence to a specific belief. To counter belief perseverance, actively test alternative hypotheses, and document and revisit initial assumptions when new evidence emerges.
A researcher examines overconfidence bias in diagnostic reasoning. Medical residents read 20 brief cases and provide (1) a diagnosis and (2) a confidence rating from 50% to 100%. One resident answers 12/20 correctly but reports confidence of 90–100% on 18 cases, including many incorrect ones. When asked to review missed cases, the resident states that the cases were “tricky” and that their original reasoning was still “basically right.” Which outcome is most consistent with overconfidence bias in this vignette?
The resident changes answers after seeing the correct key because they assume outcomes were predictable all along.
The resident adopts the diagnosis most frequently used by peers to avoid standing out, independent of case details.
The resident’s confidence closely matches their accuracy, with lower confidence on incorrect cases than correct cases.
The resident reports high certainty that exceeds their actual performance, showing poor calibration between confidence and accuracy.
Explanation
This question evaluates overconfidence bias in professional judgments. Overconfidence bias involves expressing greater certainty in one's abilities or decisions than actual performance warrants. The resident reports high confidence on most cases, including incorrect ones, and rationalizes errors without adjusting self-assessment. Choice D is correct because it highlights the mismatch between high certainty and lower accuracy, indicating poor calibration. Choice C fails by describing well-calibrated confidence, which contradicts the overestimation at the heart of overconfidence. To check for overconfidence, compare self-rated certainty to objective outcomes, and regularly solicit feedback and track accuracy to improve calibration in decision-making.
A city council evaluates whether to fund a new traffic policy. During deliberation, one member states, “If we allow protected bike lanes, next we’ll have to remove all street parking, and then businesses will collapse.” No evidence is presented linking bike lanes to business collapse, and the proposal only reallocates one lane on two streets. Other members begin repeating the same chain of outcomes as if it were likely. Which scenario best illustrates a slippery slope bias affecting the decision?
Members judge the policy as safer because they can recall several recent bike accidents reported in local media.
Members assume that a small policy change will inevitably trigger a sequence of extreme negative outcomes without supporting evidence.
Members choose the policy that minimizes losses relative to the status quo because losses loom larger than gains.
Members defer to the first speaker’s position because they believe elected officials are always experts on transportation.
Explanation
This question tests recognition of slippery slope bias in policy debates. Slippery slope bias is a fallacy in which a small initial change is assumed to lead inevitably to a chain of extreme, often negative, outcomes without evidence. Council members extrapolate from bike lanes to business collapse, repeating the unsubstantiated sequence despite the proposal's limited scope. Choice B is correct because it captures the unsupported assumption of escalating negative effects. Choice A is incorrect because it describes the availability heuristic, relying on recall of recent accidents rather than an assumed chain of outcomes. To avoid slippery slope thinking, demand evidence for each step in a proposed sequence, and evaluate proposals on their direct merits rather than hypothetical extremes.
A hospital committee is evaluating whether to adopt a new triage checklist. The chair strongly favors adoption and asks members to submit one-page memos. Before discussion, the chair circulates three patient stories in which the checklist would have flagged a serious condition earlier. During the meeting, a member mentions a small internal audit showing no change in adverse events after a pilot of the checklist, but the chair responds that the audit “missed the important cases” and returns to the patient stories. Several members then search the audit for methodological flaws but do not request additional outcome data from other units.
Which decision outcome is most consistent with the presence of confirmation bias?
The committee adopts the checklist after focusing on vivid supportive cases and discounting the neutral pilot audit as uninformative.
The committee delays the decision until it collects outcome data from multiple units using a preregistered analysis plan.
The committee adopts the checklist because members believe the chair is an authority, independent of the evidence discussed.
The committee rejects the checklist because the pilot audit shows no effect, regardless of any other information.
Explanation
This question tests recognition of confirmation bias, the tendency to search for, interpret, and recall information that confirms pre-existing beliefs. Confirmation bias manifests when people selectively attend to supporting evidence while dismissing or scrutinizing contradictory evidence. In this scenario, the chair strongly favors adoption and presents vivid patient stories supporting the checklist, then dismisses the neutral audit as having "missed the important cases," while members search for methodological flaws rather than seeking additional data. The correct answer (A) describes the classic confirmation bias outcome: adopting the checklist after focusing on supportive cases and discounting contradictory evidence as uninformative. Answer B describes unbiased decision-making with proper methodology, D treats the audit as decisive (the opposite of the bias), and C introduces authority bias, which is not the primary mechanism here. To spot confirmation bias, look for asymmetric treatment of evidence based on whether it supports the preferred conclusion.
A public health team must choose one of two messages to increase vaccination appointments. Message 1: “If you vaccinate, you will reduce your chance of infection by 60%.” Message 2: “If you do not vaccinate, you increase your chance of infection by 150%.” Both statements are mathematically equivalent given the same baseline risk. In a pilot, Message 2 produces more bookings. Which interpretation is most consistent with loss aversion influencing the decision?
People respond more to Message 2 because repeated exposure increases liking of the message, indicating mere exposure effects.
People respond more when outcomes are framed as avoiding losses, so the increased bookings under Message 2 reflect stronger motivation to prevent a negative outcome.
People respond more to Message 2 because it contains a larger number, indicating anchoring on the percent value rather than loss-related processing.
People respond more to Message 2 because they prefer options with uncertain outcomes, indicating risk-seeking regardless of framing.
Explanation
This question tests understanding of loss aversion, where potential losses have greater psychological impact than equivalent gains. Loss aversion explains why people are more motivated to avoid negative outcomes than to achieve positive outcomes of equal magnitude, leading to stronger responses to loss-framed messages. In this scenario, Message 2 frames non-vaccination as increasing infection risk (a loss), while Message 1 frames vaccination as reducing risk (a gain), though both convey equivalent information. The correct answer (B) identifies that the increased bookings under loss framing reflect stronger motivation to prevent a negative outcome. Answer C incorrectly attributes the effect to anchoring on the larger number rather than to loss/gain framing. To identify loss aversion, look for stronger responses to avoiding losses than to achieving equivalent gains.
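The claim that the two messages are mathematically equivalent can be verified with a short calculation. The baseline risk below is an arbitrary placeholder; any positive baseline gives the same ratios:

```python
# Check that "60% reduction" (gain frame) and "150% increase" (loss frame)
# describe the same pair of risks. The baseline value is arbitrary.

baseline = 0.10                      # unvaccinated infection risk (illustrative)
vaccinated = baseline * (1 - 0.60)   # Message 1: vaccination cuts risk by 60%

# Message 2: relative increase going from vaccinated to unvaccinated risk
increase = (baseline - vaccinated) / vaccinated
print(f"{increase:.0%}")  # 150% -- the same tradeoff, loss-framed
```

Since 1 / (1 − 0.60) = 2.5, the unvaccinated risk is always 150% higher than the vaccinated risk whenever vaccination reduces risk by 60%, so only the framing differs between the two messages.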
In a lab study of anchoring, adult participants are told they will negotiate a used laptop price with a seller. Before making an offer, each participant sees a “suggested market price” that is randomly assigned. The laptop’s actual condition and specs are held constant across participants, and all participants receive the same objective comparison sheet (typical range: $450–$550). Participants then write (1) their first offer and (2) their estimate of the laptop’s fair value. The researcher notes that participants often report using the comparison sheet, but their first offer still varies systematically with the suggested price. Which decision outcome is most consistent with the presence of anchoring in this vignette?
Participants’ first offers converge near $500 regardless of the suggested price, because the comparison sheet eliminates bias.
Participants make lower first offers after reading negative reviews because they selectively attend to unfavorable information about the laptop.
Participants shown a $650 suggested price make higher first offers and higher “fair value” estimates than those shown a $350 suggested price, even with the same comparison sheet.
Participants with prior laptop-buying experience make higher first offers than novices, independent of the suggested price, due to expertise effects.
Explanation
This question tests understanding of anchoring bias, where initial numerical information disproportionately influences subsequent judgments. Anchoring occurs when people rely too heavily on the first piece of information encountered (the anchor), even when that information is arbitrary or irrelevant. In this scenario, the randomly assigned "suggested market price" serves as an anchor that systematically influences participants' offers despite their access to objective comparison data. The correct answer (C) demonstrates anchoring because participants exposed to the $650 anchor make higher offers and fair-value estimates than those exposed to the $350 anchor, showing the anchor's persistent influence. Answer A incorrectly suggests the comparison sheet eliminates bias, when anchoring typically persists even with objective information available. A key check for anchoring is whether judgments systematically vary with an arbitrary initial value despite access to better information.
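The anchoring pattern can be expressed as a toy model rather than the study's actual data: a first offer modeled as a weighted blend of the arbitrary anchor and the comparison sheet's midpoint. The weight is an assumed illustration parameter; anchoring simply predicts it is greater than zero even when objective information is available:

```python
# Toy model of an anchored judgment (illustrative, not the study's data).
# The first offer blends the random anchor with the comparison-sheet
# midpoint ($500, the center of the $450-$550 range). The weight w is an
# assumed parameter; any w > 0 reproduces the anchoring pattern.

def first_offer(anchor, sheet_midpoint=500, w=0.3):
    return w * anchor + (1 - w) * sheet_midpoint

high = first_offer(650)   # pulled above the sheet midpoint
low = first_offer(350)    # pulled below the sheet midpoint
print(high > low)         # True: offers vary systematically with the anchor
```

A fully debiased participant would correspond to w = 0 (offers converge on the sheet midpoint regardless of anchor), which is exactly the pattern the vignette says did not occur.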
In an observational study of investing behavior, participants are asked to choose between (Option 1) keeping a stock they bought last year and (Option 2) selling it to buy a diversified index fund. The stock has declined 25% since purchase, and the participant’s written rationale emphasizes, “I’ve already put so much into it; selling would make that loss real.” Participants are reminded that both options have the same expected return over the next year based on provided projections. Which decision is most likely given the influence of the sunk cost fallacy?
Switch to the index fund because the participant correctly ignores past costs and focuses only on expected future returns.
Keep the stock because prior investment is treated as a reason to continue, despite projections indicating no advantage.
Sell the stock to avoid future regret, because anticipated regret eliminates the impact of past investments.
Keep the stock because the participant assumes the stock is “due” to rebound, reflecting a belief in random streak correction.
Explanation
This question tests understanding of the sunk cost fallacy, where past investments inappropriately influence current decisions even though those costs cannot be recovered. The sunk cost fallacy violates rational decision-making by weighing irrelevant past expenditures rather than focusing solely on future outcomes. In this scenario, the participant explicitly worries about "making the loss real" by selling, treating the past investment as a reason to continue holding despite equal expected future returns. The correct answer (B) describes keeping the stock because prior investment is treated as justification to continue, even when projections show no advantage. Answer A describes rational behavior that ignores sunk costs, which contradicts the participant's stated reasoning. To identify the sunk cost fallacy, look for decisions justified by past investments rather than future expectations.
A public health team evaluates whether to expand a screening program. Two briefs describe the same outcomes in different frames:
Brief 1: “With expansion, 90 out of 100 high-risk patients will be correctly reassured they do not have the disease.”
Brief 2: “With expansion, 10 out of 100 high-risk patients will receive a false alarm and require follow-up testing.”
In pilot meetings, administrators exposed to Brief 1 show higher support for expansion than those exposed to Brief 2, even though both statements describe the same tradeoff. Based on the vignette, how might the framing effect influence the decision?
Support will depend only on the base rate of disease in the community, regardless of how outcomes are described.
Support will decrease when administrators are asked to justify their decision publicly, due to social desirability bias.
Support will be highest when administrators focus on a single memorable false-alarm case, because vivid events always override statistics.
Support will be higher when outcomes are presented as gains (correct reassurance) than when equivalent outcomes are presented as losses (false alarms).
Explanation
This question evaluates the framing effect in policy decisions. The framing effect refers to how equivalent information presented as gains or losses influences preferences, with gain frames often perceived more favorably than loss frames. In the vignette, the same screening outcomes are framed as correct reassurances (gains) in Brief 1 and as false alarms (losses) in Brief 2, leading to higher support for the gain-framed version. Choice D is correct because it explains the increased support for expansion when outcomes emphasize gains over losses. Choice C fails because it describes the availability heuristic, with vivid events overriding statistics, rather than framing. To identify framing effects, compare reactions to positively versus negatively worded but equivalent options, and reframe information both ways to ensure decisions are not unduly swayed by presentation.