New SAT Reading : Graphs and Tables

Study concepts, example questions & explanations for New SAT Reading


Example Questions

Example Question #1 : Relating Graphics To The Passage

This passage is adapted from Adam K. Fetterman and Kai Sassenberg, “The Reputational Consequences of Failed Replications and Wrongness Admission among Scientists,” first published in December 2015 by PLOS ONE.

We like to think of science as purely rational. However, scientists are human and often identify with their work. Therefore, it should not be controversial to suggest that emotions are involved in replication discussions. Adding to this inherently emotionally volatile situation, the recent increase in the use of social media and blogs by scientists has allowed for instantaneous, unfiltered, and at times emotion-based commentary on research. Certainly social media has the potential to lead to many positive outcomes in science–among others, to create a more open science. To some, however, it seems as if this ease of communication is also leading to the public tar and feathering of scientists. Whether these assertions are true is up for debate, but we assume they are a part of many scientists’ subjective reality. Indeed, when failed replications are discussed in the same paragraphs as questionable research practices, or even fraud, it is hard to separate the science from the scientist. Questionable research practices and fraud are not about the science; they are about the scientist. We believe that these considerations are at least part of the reason that we find the overestimation effect that we do here.

[Sentence 1] Even so, the current data suggests that while many are worried about how a failed replication would affect their reputation, it is probably not as bad as they think. Of course, the current data cannot provide evidence that there are no negative effects; just that the negative impact is overestimated. That said, everyone wants to be seen as competent and honest, but failed replications are a part of science. In fact, they are how science moves forward!

[Sentence 2] While we imply that these effects may be exacerbated by social media, the data cannot directly speak to this. However, any one of a number of cognitive biases may add support to this assumption and explain our findings. For example, it may be that a type of availability bias or pluralistic ignorance, in which the more vocal and critical voices lead individuals to judge current opinions as more negative than they really are, is at work. As a result, it is easy to conflate discussions about direct replications with “witch-hunts” and overestimate the impact on one’s own reputation. Whatever the source may be, it is worth looking at the potential negative impact of social media in scientific conversations.

[Sentence 3] If the desire is to move science forward, scientists need to be able to acknowledge when they are wrong. Theories come and go, and scientists learn from their mistakes (if they can even be called “mistakes”). This is the point of science. However, holding on to faulty ideas flies in the face of the scientific method. Even so, it often seems as if scientists have a hard time admitting wrongness. This seems doubly true when someone else fails to replicate a scientist’s findings. In some cases, this may be the proper response. Just as often, though, it is not. In most cases, admitting wrongness will have fewer ill effects on one’s reputation than not admitting, and it may even be better for one’s reputation. It could also be that wrongness admission repairs damage to reputation.

It may seem strange that others consider it less likely that questionable research practices, for example, were used when a scientist admits that they were wrong. [Sentence 4] However, it does make sense from the standpoint that wrongness admission seems to indicate honesty. Therefore, if one is honest in one domain, they are likely honest in other domains. Moreover, the refusal to admit might indicate to others that the original scientist is trying to cover something up. The lack of significance of most of the interactions in our study suggests that scientists might already realize this. Therefore, we can generally suggest that scientists admit they are wrong, but only when the evidence suggests they should.

The chart below maps how scientists view others' work (left) and how they suspect others will view their own work (right) if the researcher (the scientist or another, depending on the focus) admitted to engaging in questionable research practices.

[Figure: chart described in the preceding paragraph]

Adapted from Fetterman & Sassenberg, "The Reputational Consequences of Failed Replications and Wrongness Admission among Scientists." December 9, 2015, PLOS ONE.

Which statement from the passage is most directly supported by the information provided in the graph?

Possible Answers:

Sentence 2 ("While we ... findings")

Sentence 4 ("However, it ... domains")

Sentence 1 ("Even so ... overestimated")

Sentence 3 ("If the ... mistakes")")

Correct answer:

Sentence 1 ("Even so ... overestimated")

Explanation:

This question asks you to link the graph to the set of lines within the passage that it best supports. Since you are looking for direct evidence here, the best course of action is to consider what you're being told within the graph and then use process of elimination to link that to one of the statements given. The graph suggests that scientists viewed others who admitted to wrongdoing with less suspicion than they viewed people who did not admit to wrongdoing. However, scientists believed that others would view them as less trustworthy if they admitted to wrongdoing.

Sentence 1 states that while scientists are worried about what will happen to their reputations if they admit to wrongdoing, they are probably worrying more about it than is warranted. This matches the information given - scientists viewed others who admitted to wrongdoing with less suspicion than those who didn't admit to it (but who did something wrong).

Sentence 2 deals with the role of social media within this debate. Since the graph gives no information about the effect of social media, this option can be eliminated.

Sentence 3 gives a potential course of action - that scientists should admit to their mistakes so that science can move forward. However, this isn't mentioned in the graph, so this option can be eliminated.

Sentence 4 gives an explanation for why the data in the graph may have occurred, but it isn't itself supported by the graph. (Be careful not to get the direction of support backwards here - you are looking for a statement that is supported by the graph, not the other way around!)

Example Question #1 : Graphs And Tables

The following passage and corresponding figure are from Emilie Reas, "How the brain learns to read: development of the “word form area”", PLOS Neuro Community, 2018.

The ability to recognize, process and interpret written language is a uniquely human skill that is acquired with remarkable ease at a young age. But as anyone who has attempted to learn a new language will attest, the brain isn’t “hardwired” to understand written language. In fact, it remains somewhat of a mystery how the brain develops this specialized ability. Although researchers have identified brain regions that process written words, how this selectivity for language develops isn’t entirely clear. 

Earlier studies have shown that the ventral visual cortex supports recognition of an array of visual stimuli, including objects, faces, and places. Within this area, a subregion in the left hemisphere known as the “visual word form area” (VWFA) shows a particular selectivity for written words. However, this region is characteristically plastic. It’s been proposed that stimuli compete for representation in this malleable area, such that “winner takes all” depending on the strongest input. That is, how a site is ultimately mapped is dependent on what it’s used for in early childhood. But this idea has yet to be confirmed, and the evolution of specialized brain areas for reading in children is still poorly understood.

In their study, Dehaene-Lambertz and colleagues monitored the reading abilities and brain changes of ten six-year-old children to track the emergence of word specialization during a critical development period. Over the course of their first school year, children were assessed every two months with reading evaluations and functional MRI while viewing words and non-word images (houses, objects, faces, bodies). As expected, reading ability improved over the year of first grade, as demonstrated by increased reading speed, word span, and phoneme knowledge, among other measures.

Even at this young age, when reading ability was newly acquired, words evoked widespread left-lateralized brain activation. This activity increased over the year of school, with the greatest boost occurring after just the first few months. Importantly, there were no similar activation increases in response to other stimuli, confirming that these adaptations were specific to reading ability, not a general effect of development or education. Immediately after school began, the brain volume specialized for reading also significantly increased. Furthermore, reading speed was associated with greater activity, particularly in the VWFA. The researchers found that activation patterns to words became more reliable with learning. In contrast, the patterns for other categories remained stable, with the exception of numbers, which may reflect specialization for symbols (words and numbers) generally, or correlation with the simultaneous development of mathematics skills.

What predisposes one brain region over another to take on this specialized role for reading words? Before school, there was no strong preference for any other category in regions that would later become word-responsive. However, brain areas that were destined to remain “non-word” regions showed more stable responses to non-word stimuli even before learning to read. Thus, perhaps the brain takes advantage of unoccupied real-estate to perform the newly acquired skill of reading.

These findings add a critical piece to the puzzle of how reading skills are acquired in the developing child brain. Though it was already known that reading recruits a specialized brain region for words, this study reveals that this occurs without changing the organization of areas already specialized for other functions. The authors propose an elegant model for the developmental brain changes underlying reading skill acquisition. In the illiterate child, there are adjacent columns or patches of cortex either tuned to a specific category, or not yet assigned a function. With literacy, the free subregions become tuned to words, while the previously specialized subregions remain stable.

The rapid emergence of the word area after just a brief learning period highlights the remarkable plasticity of the developing cortex. In individuals who become literate as adults, the same VWFA is present. However, in contrast to children, the relation between reading speed and activation in this area is weaker in adults, and a single adult case-study by the authors showed a much slower, gradual development of the VWFA over a prolonged learning period of several months. Whatever the reason, this region appears primed to rapidly adopt novel representations of symbolic words, and this priming may peak at a specific period in childhood. This finding underscores the importance of a strong education in youth. The authors surmise that “the success of education might also rely on the right timing to benefit from the highest neural plasticity. Our results might also explain why numerous academic curricula, even in ancient civilizations, propose to teach reading around seven years.”

The figure below shows different skills mapped to different sites in the brain before schooling and then with and without school. Labile sites refer to sites that are not currently mapped to a particular skill.

[Figure: brain-mapping diagram described in the preceding paragraph]

Does the information in the figure support the “winner takes all” theory?

Possible Answers:

Yes, because it shows that each cortical column is only attuned to a single skill.

No, because it shows different patterns in children with and without schooling.

No, because it only addresses what skills are represented in each region, not the representation of stimuli.

Yes, because it shows that in children without schooling, faces are better represented within the given subregion than tools are.

Correct answer:

No, because it only addresses what skills are represented in each region, not the representation of stimuli.

Explanation:

This question requires two pieces of information. First, it requires you to understand the idea behind the "winner takes all" theory. The theory states that the function for which a site is mapped depends on what that site is used for in early childhood. Second, you need to understand whether the information presented in the figure matches this statement. You are shown that before schooling, there is a set of "labile" sites (unmapped sites) and sites that are keyed to different skills like tools, faces, and houses. With schooling, some of the labile sites become mapped to words. Without schooling, those same labile sites become mapped to one of the skills already represented. However, the figure does not show how the labile sites were used in early childhood, only how information was later mapped onto the brain. You therefore cannot conclude that there is support for "winner takes all" since there is no discussion of the representation of stimuli.

Example Question #2 : Relating Graphics To The Passage

The following passage and corresponding figure are from Emilie Reas, "How the brain learns to read: development of the “word form area”", PLOS Neuro Community, 2018.

The ability to recognize, process and interpret written language is a uniquely human skill that is acquired with remarkable ease at a young age. But as anyone who has attempted to learn a new language will attest, the brain isn’t “hardwired” to understand written language. In fact, it remains somewhat of a mystery how the brain develops this specialized ability. Although researchers have identified brain regions that process written words, how this selectivity for language develops isn’t entirely clear. 

Earlier studies have shown that the ventral visual cortex supports recognition of an array of visual stimuli, including objects, faces, and places. Within this area, a subregion in the left hemisphere known as the “visual word form area” (VWFA) shows a particular selectivity for written words. However, this region is characteristically plastic. It’s been proposed that stimuli compete for representation in this malleable area, such that “winner takes all” depending on the strongest input. That is, how a site is ultimately mapped is dependent on what it’s used for in early childhood. But this idea has yet to be confirmed, and the evolution of specialized brain areas for reading in children is still poorly understood.

In their study, Dehaene-Lambertz and colleagues monitored the reading abilities and brain changes of ten six-year-old children to track the emergence of word specialization during a critical development period. Over the course of their first school year, children were assessed every two months with reading evaluations and functional MRI while viewing words and non-word images (houses, objects, faces, bodies). As expected, reading ability improved over the year of first grade, as demonstrated by increased reading speed, word span, and phoneme knowledge, among other measures.

Even at this young age, when reading ability was newly acquired, words evoked widespread left-lateralized brain activation. This activity increased over the year of school, with the greatest boost occurring after just the first few months. Importantly, there were no similar activation increases in response to other stimuli, confirming that these adaptations were specific to reading ability, not a general effect of development or education. Immediately after school began, the brain volume specialized for reading also significantly increased. Furthermore, reading speed was associated with greater activity, particularly in the VWFA. The researchers found that activation patterns to words became more reliable with learning. In contrast, the patterns for other categories remained stable, with the exception of numbers, which may reflect specialization for symbols (words and numbers) generally, or correlation with the simultaneous development of mathematics skills.

What predisposes one brain region over another to take on this specialized role for reading words? Before school, there was no strong preference for any other category in regions that would later become word-responsive. However, brain areas that were destined to remain “non-word” regions showed more stable responses to non-word stimuli even before learning to read. Thus, perhaps the brain takes advantage of unoccupied real-estate to perform the newly acquired skill of reading.

These findings add a critical piece to the puzzle of how reading skills are acquired in the developing child brain. Though it was already known that reading recruits a specialized brain region for words, this study reveals that this occurs without changing the organization of areas already specialized for other functions. The authors propose an elegant model for the developmental brain changes underlying reading skill acquisition. In the illiterate child, there are adjacent columns or patches of cortex either tuned to a specific category, or not yet assigned a function. With literacy, the free subregions become tuned to words, while the previously specialized subregions remain stable.

The rapid emergence of the word area after just a brief learning period highlights the remarkable plasticity of the developing cortex. In individuals who become literate as adults, the same VWFA is present. However, in contrast to children, the relation between reading speed and activation in this area is weaker in adults, and a single adult case-study by the authors showed a much slower, gradual development of the VWFA over a prolonged learning period of several months. Whatever the reason, this region appears primed to rapidly adopt novel representations of symbolic words, and this priming may peak at a specific period in childhood. This finding underscores the importance of a strong education in youth. The authors surmise that “the success of education might also rely on the right timing to benefit from the highest neural plasticity. Our results might also explain why numerous academic curricula, even in ancient civilizations, propose to teach reading around seven years.”

The figure below shows different skills mapped to different sites in the brain before schooling and then with and without school. Labile sites refer to sites that are not currently mapped to a particular skill.

[Figure: brain-mapping diagram described in the preceding paragraph]

Based on the information given in the passage and the figure, which of the following is true?

Possible Answers:

Words associated with particular objects are always mapped onto the region next to where information about the object is formed.

New information associated with words is mapped onto labile sites rather than onto sites already dedicated to a particular skill.

Becoming literate is more difficult for adults because many of the sites that could be attuned to words are already tuned to other objects.

Students who become literate experience a decrease in their ability to recognize faces.

Correct answer:

New information associated with words is mapped onto labile sites rather than onto sites already dedicated to a particular skill.

Explanation:

This question asks you to draw a valid conclusion from the information given in the graph. Since it gives no information or context as to what you're looking for, the best course of action is simply to examine each answer choice and determine which has support within the graph and which does not. "Students who become literate experience a decrease in their ability to recognize faces" can be eliminated based on a careful examination of the figure. Between the starting figure and the literate figure, none of the sites dedicated to faces goes away. There are just fewer additional sites dedicated to faces in the literate figure than in the non-schooled figure. "Words associated with particular objects are always mapped onto the region next to where information about the object is formed" can also be eliminated since the figure doesn't give any indication as to the type of words mapped in the word areas, so there is no way to tell if this is true. "Becoming literate is more difficult for adults because many of the sites that could be attuned to words are already tuned to other objects" similarly cannot be supported by the figure since the figure doesn't address the difference between adults and children. "New information associated with words is mapped onto labile sites rather than onto sites already dedicated to a particular skill" is clearly supported by the figure, however. Word sites are only mapped onto sites that were previously unoccupied by other skills, supporting the idea that new words are only mapped onto labile sites.

Example Question #751 : New SAT

The passage is adapted from Ngonghala CN, et al.’s “Poverty, Disease, and the Ecology of Complex Systems” © 2014 Ngonghala et al.

In his landmark treatise, An Essay on the Principle of Population, Reverend Thomas Robert Malthus argued that population growth will necessarily exceed the growth rate of the means of subsistence, making poverty inevitable. The system of feedbacks that Malthus posited creates a situation similar to what social scientists now term a “poverty trap”: i.e., a self-reinforcing mechanism that causes poverty to persist. Malthus’s erroneous assumptions, which did not account for rapid technological progress, rendered his core prediction wrong: the world has enjoyed unprecedented economic development in the ensuing two centuries due to technology-driven productivity growth. 

Nonetheless, for the billion people who still languish in chronic extreme poverty, Malthus’s ideas about the importance of biophysical and biosocial feedback (e.g., interactions between human behavior and resource availability) to the dynamics of economic systems still ring true. Indeed, while they were based on observations of human populations, Malthus’s ideas had reverberations throughout the life sciences. His insights were based on important underlying processes that provided inspiration to both Darwin and Wallace as they independently derived the theory of evolution by natural selection. Likewise, these principles underlie standard models of population biology, including logistic population growth models, predator-prey models, and the epidemiology of host-pathogen dynamics. 

The economics literature on poverty traps, where extreme poverty of some populations persists alongside economic prosperity among others, has a history in various schools of thought. The most Malthusian of models were advanced later by Leibenstein and Nelson, who argued that interactions between economic, capital, and population growth can create a subsistence-level equilibrium. Today, the most common models of poverty traps are rooted in neoclassical growth theory, which is the dominant foundational framework for modeling economic growth. Though sometimes controversial, poverty trap concepts have been integral to some of the most sweeping efforts to catalyze economic development, such as those manifest in the Millennium Development Goals. 

The modern economics literature on poverty traps, however, is strikingly silent about the role of feedbacks from biophysical and biosocial processes. Two overwhelming characteristics of under-developed economies and the poorest, mostly rural, subpopulations in those countries are (i) the dominant role of resource-dependent primary production—from soils, fisheries, forests, and wildlife—as the root source of income and (ii) the high rates of morbidity and mortality due to parasitic and infectious diseases. For basic subsistence, the extremely poor rely on human capital that is directly generated from their ability to obtain resources, and thus critically influenced by climate and soil that determine the success of food production. These resources in turn influence the nutrition and health of individuals, but can also be influenced by a variety of other biophysical processes. For example, infectious and parasitic diseases effectively steal human resources for their own survival and transmission. Yet scientists rarely integrate even the most rudimentary frameworks for understanding these ecological processes into models of economic growth and poverty. 

This gap in the literature represents a major missed opportunity to advance our understanding of coupled ecological-economic systems. Through feedbacks between lower-level localized behavior and the higher-level processes that they drive, ecological systems are known to demonstrate complex emergent properties that can be sensitive to initial conditions. A large range of ecological systems—as revealed in processes like desertification, soil degradation, coral reef bleaching, and epidemic disease—have been characterized by multiple stable states, with direct consequences for the livelihoods of the poor. These multiple stable states, which arise from nonlinear positive feedbacks, imply sensitivity to initial conditions. 

While Malthus’s original arguments about the relationship between population growth and resource availability were overly simplistic (resulting in only one stable state of subsistence poverty), they led to more sophisticated characterizations of complex ecological processes. In this light, we suggest that breakthroughs in understanding poverty can still benefit from two of his enduring contributions to science: (i) models that are true to underlying mechanisms can lead to critical insights, particularly of complex emergent properties, that are not possible from pure phenomenological models; and (ii) there are significant implications for models that connect human economic behavior to biological constraints.

[Figures: world population over time and the percentage of people living in extreme poverty by region, 1990-2015]

Which of the following best describes how the data in the two graphs supports Malthus’s prediction that population growth will necessarily exceed the growth rate of the means of subsistence, making poverty an inevitable consequence?

Possible Answers:

It contradicts Malthus’s prediction because it demonstrates that poverty remains highest in the same regions of the world year after year.

It supports Malthus’s prediction because it shows that poverty is still a major problem in the world.

It contradicts Malthus’s prediction because it shows that poverty is decreasing even while the population is increasing.

It supports Malthus’s prediction because it demonstrates that poverty is a problem that can be solved in certain regions.

Correct answer:

It contradicts Malthus’s prediction because it shows that poverty is decreasing even while the population is increasing.

Explanation:

Malthus predicts that poverty will become more of a problem as the population increases, but the second graph shows that poverty decreased significantly over the 25 years for which the first graph demonstrates population growth. The graphs, therefore, contradict Malthus’s prediction. Accordingly, you can eliminate both answer options that begin, “It supports Malthus’s prediction.”

The question then hinges on why the graphs contradict Malthus. “It contradicts Malthus’s prediction because it shows that poverty is decreasing even while the population is increasing” provides the proper description: Malthus suggested that poverty would increase along with population, but the graphs show otherwise. As the population has increased, poverty has decreased.

Showing that poverty remains a large problem wouldn’t contradict Malthus, whose prediction was that poverty would be an inevitable consequence of population growth. So even if the explanation in “It supports Malthus’s prediction because it demonstrates that poverty is a problem that can be solved in certain regions” were true, it wouldn’t perform the contradictory function. The remaining option, “It contradicts Malthus’s prediction because it demonstrates that poverty remains highest in the same regions of the world year after year,” also fails: although the phenomenon it describes is true for two regions – both South Asia and Sub-Saharan Africa began and ended the 25 years with the highest percentage of people in extreme poverty – it doesn’t hold for East Asia, which went from a near tie for the most poverty-stricken region to among the lowest percentages of extreme poverty.

Example Question #1 : Graphs And Tables

The passage is adapted from Ngonghala CN, et al.’s “Poverty, Disease, and the Ecology of Complex Systems” © 2014 Ngonghala et al.

In his landmark treatise, An Essay on the Principle of Population, Reverend Thomas Robert Malthus argued that population growth will necessarily exceed the growth rate of the means of subsistence, making poverty inevitable. The system of feedbacks that Malthus posited creates a situation similar to what social scientists now term a “poverty trap”: i.e., a self-reinforcing mechanism that causes poverty to persist. Malthus’s erroneous assumptions, which did not account for rapid technological progress, rendered his core prediction wrong: the world has enjoyed unprecedented economic development in the ensuing two centuries due to technology-driven productivity growth. 

Nonetheless, for the billion people who still languish in chronic extreme poverty, Malthus’s ideas about the importance of biophysical and biosocial feedback (e.g., interactions between human behavior and resource availability) to the dynamics of economic systems still ring true. Indeed, while they were based on observations of human populations, Malthus’s ideas had reverberations throughout the life sciences. His insights were based on important underlying processes that provided inspiration to both Darwin and Wallace as they independently derived the theory of evolution by natural selection. Likewise, these principles underlie standard models of population biology, including logistic population growth models, predator-prey models, and the epidemiology of host-pathogen dynamics. 

The economics literature on poverty traps, where extreme poverty of some populations persists alongside economic prosperity among others, has a history in various schools of thought. The most Malthusian of models were advanced later by Leibenstein and Nelson, who argued that interactions between economic, capital, and population growth can create a subsistence-level equilibrium. Today, the most common models of poverty traps are rooted in neoclassical growth theory, which is the dominant foundational framework for modeling economic growth. Though sometimes controversial, poverty trap concepts have been integral to some of the most sweeping efforts to catalyze economic development, such as those manifest in the Millennium Development Goals. 

The modern economics literature on poverty traps, however, is strikingly silent about the role of feedbacks from biophysical and biosocial processes. Two overwhelming characteristics of under-developed economies and the poorest, mostly rural, subpopulations in those countries are (i) the dominant role of resource-dependent primary production—from soils, fisheries, forests, and wildlife—as the root source of income and (ii) the high rates of morbidity and mortality due to parasitic and infectious diseases. For basic subsistence, the extremely poor rely on human capital that is directly generated from their ability to obtain resources, and thus critically influenced by climate and soil that determine the success of food production. These resources in turn influence the nutrition and health of individuals, but can also be influenced by a variety of other biophysical processes. For example, infectious and parasitic diseases effectively steal human resources for their own survival and transmission. Yet scientists rarely integrate even the most rudimentary frameworks for understanding these ecological processes into models of economic growth and poverty. 

This gap in the literature represents a major missed opportunity to advance our understanding of coupled ecological-economic systems. Through feedbacks between lower-level localized behavior and the higher-level processes that they drive, ecological systems are known to demonstrate complex emergent properties that can be sensitive to initial conditions. A large range of ecological systems—as revealed in processes like desertification, soil degradation, coral reef bleaching, and epidemic disease—have been characterized by multiple stable states, with direct consequences for the livelihoods of the poor. These multiple stable states, which arise from nonlinear positive feedbacks, imply sensitivity to initial conditions. 

While Malthus’s original arguments about the relationship between population growth and resource availability were overly simplistic (resulting in only one stable state of subsistence poverty), they led to more sophisticated characterizations of complex ecological processes. In this light, we suggest that breakthroughs in understanding poverty can still benefit from two of his enduring contributions to science: (i) models that are true to underlying mechanisms can lead to critical insights, particularly of complex emergent properties, that are not possible from pure phenomenological models; and (ii) there are significant implications for models that connect human economic behavior to biological constraints.

[Figures: world population over time and the percentage of people living in extreme poverty by region, 1990-2015]

Which of the following conclusions is best supported by the two graphs?

Possible Answers:

In 1999, there were more people living in extreme poverty in South Asia than in East Asia and the Pacific.

As of 2015, less than 3.5 billion people in the world lived in extreme poverty.

Fewer people in Latin America & the Caribbean lived in poverty in 2008 than in 2002.

Extreme poverty is not a major concern in Europe & Central Asia.

Correct answer:

As of 2015, less than 3.5 billion people in the world lived in extreme poverty.

Explanation:

Every now and then on an SAT reading passage, a question might ask us to do a bit of math to prove the correct answer. In this case, from the second graph, you can conclude that the worldwide percentage of people living in extreme poverty is less than 45%. Why? Because you know that the six other regions have a percentage less than the percentage in Sub-Saharan Africa, which is somewhere between 40 and 50% on the graph. So even though you cannot find the exact percentage, you know that there is a limit at or below 45%. And since the first graph tells you that the world population in 2015 was approximately 7.25 billion people, you then know that the number of people living in extreme poverty is less than 45% of 7.25 billion. The 3.5 billion people threshold in our correct answer is safely above that level, so you can firmly conclude that “As of 2015, less than 3.5 billion people in the world lived in extreme poverty.”
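To make that arithmetic explicit (using the approximate readings described above: a worldwide rate of at most 45% and a 2015 world population of about 7.25 billion; the exact graph values may differ slightly):

0.45 × 7.25 billion ≈ 3.26 billion, which is comfortably below the 3.5 billion threshold.

So even at the upper end of those estimates, the number of people living in extreme poverty in 2015 must be less than 3.5 billion.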

Two of our remaining choices are guilty of the same error. Both “In 1999, there were more people living in extreme poverty in South Asia than in East Asia and the Pacific” and “Fewer people in Latin America & the Caribbean lived in poverty in 2008 than in 2002” try to make a numerical comparison solely from the data in the second graph. Since this graph only gives information as a percent of the people in each region at each time, you cannot make the necessary comparisons.

Finally, “Extreme poverty is not a major concern in Europe & Central Asia” is incorrect because it goes beyond what can be proven by the data. “… is not a major concern” is a value judgment: sure, in comparison to the other regions one might say that Europe & Central Asia have less of a problem with extreme poverty, but to those >1% of people who meet that standard, poverty is a major concern! Absent a definition in the problem, you cannot conclude whether something is a major or minor concern.

Example Question #1 : Relating Graphics To The Passage

The passage is adapted from Ngonghala CN, et al.’s “Poverty, Disease, and the Ecology of Complex Systems” © 2014 Ngonghala et al.

In his landmark treatise, An Essay on the Principle of Population, Reverend Thomas Robert Malthus argued that population growth will necessarily exceed the growth rate of the means of subsistence, making poverty inevitable. The system of feedbacks that Malthus posited creates a situation similar to what social scientists now term a “poverty trap”: i.e., a self-reinforcing mechanism that causes poverty to persist. Malthus’s erroneous assumptions, which did not account for rapid technological progress, rendered his core prediction wrong: the world has enjoyed unprecedented economic development in the ensuing two centuries due to technology-driven productivity growth. 

Nonetheless, for the billion people who still languish in chronic extreme poverty, Malthus’s ideas about the importance of biophysical and biosocial feedback (e.g., interactions between human behavior and resource availability) to the dynamics of economic systems still ring true. Indeed, while they were based on observations of human populations, Malthus’s ideas had reverberations throughout the life sciences. His insights were based on important underlying processes that provided inspiration to both Darwin and Wallace as they independently derived the theory of evolution by natural selection. Likewise, these principles underlie standard models of population biology, including logistic population growth models, predator-prey models, and the epidemiology of host-pathogen dynamics. 

The economics literature on poverty traps, where extreme poverty of some populations persists alongside economic prosperity among others, has a history in various schools of thought. The most Malthusian of models were advanced later by Leibenstein and Nelson, who argued that interactions between economic, capital, and population growth can create a subsistence-level equilibrium. Today, the most common models of poverty traps are rooted in neoclassical growth theory, which is the dominant foundational framework for modeling economic growth. Though sometimes controversial, poverty trap concepts have been integral to some of the most sweeping efforts to catalyze economic development, such as those manifest in the Millennium Development Goals. 

The modern economics literature on poverty traps, however, is strikingly silent about the role of feedbacks from biophysical and biosocial processes. Two overwhelming characteristics of under-developed economies and the poorest, mostly rural, subpopulations in those countries are (i) the dominant role of resource-dependent primary production—from soils, fisheries, forests, and wildlife—as the root source of income and (ii) the high rates of morbidity and mortality due to parasitic and infectious diseases. For basic subsistence, the extremely poor rely on human capital that is directly generated from their ability to obtain resources, and thus critically influenced by climate and soil that determine the success of food production. These resources in turn influence the nutrition and health of individuals, but can also be influenced by a variety of other biophysical processes. For example, infectious and parasitic diseases effectively steal human resources for their own survival and transmission. Yet scientists rarely integrate even the most rudimentary frameworks for understanding these ecological processes into models of economic growth and poverty. 

This gap in the literature represents a major missed opportunity to advance our understanding of coupled ecological-economic systems. Through feedbacks between lower-level localized behavior and the higher-level processes that they drive, ecological systems are known to demonstrate complex emergent properties that can be sensitive to initial conditions. A large range of ecological systems—as revealed in processes like desertification, soil degradation, coral reef bleaching, and epidemic disease—have been characterized by multiple stable states, with direct consequences for the livelihoods of the poor. These multiple stable states, which arise from nonlinear positive feedbacks, imply sensitivity to initial conditions. 

While Malthus’s original arguments about the relationship between population growth and resource availability were overly simplistic (resulting in only one stable state of subsistence poverty), they led to more sophisticated characterizations of complex ecological processes. In this light, we suggest that breakthroughs in understanding poverty can still benefit from two of his enduring contributions to science: (i) models that are true to underlying mechanisms can lead to critical insights, particularly of complex emergent properties, that are not possible from pure phenomenological models; and (ii) there are significant implications for models that connect human economic behavior to biological constraints.

[Figures: world population over time and the percentage of people living in extreme poverty by region, 1990-2015]

If the author is correct that technology is allowing larger populations to survive with decreased poverty, which of the following statements would best explain the data displayed in the second graph?

Possible Answers:

East Asia underwent a major technological revolution between the years 1990 and 2015.

Technology growth stagnated worldwide between 1996 and 1999.

Sub-Saharan Africa underwent a surge in technological development from 1990 to 2000, at which point that development slowed down.

There has been no technological development in Latin America and the Caribbean since 1990.

Correct answer:

East Asia underwent a major technological revolution between the years 1990 and 2015.

Explanation:

This problem asks you which answer choice would best explain the data in the second graph, meaning that the correct answer choice is information that would lead to the changes in extreme poverty in the chart. The answer choice, then, needs to be a cause for which the chart is an effect. You’re also told that the answers will relate to the author’s prediction that technology allows higher populations to survive with decreased poverty, meaning that you want to pair decreases in poverty with increases in technology.

Even though the line for Latin America & the Caribbean is flatter than other lines, it still trends downward over time. “No technological development” would not lead to a decrease in poverty. Thus, we can eliminate “There has been no technological development in Latin America and the Caribbean since 1990.” “Sub-Saharan Africa underwent a surge in technological development from 1990 to 2000, at which point that development slowed down” is also incorrect. The poverty rate in Sub-Saharan Africa declined most sharply after 2000, so one would think that its technological advances continued well into the 2000-2015 period. 

Additionally, although a few lines were at their flattest during that period, in South Asia and in Europe & Central Asia the lines dropped, so it is likely that there was still significant technological development occurring in the world during that period. And since the years immediately following saw major decreases in poverty in all regions, one would think that some of the technological development in that short period helped to cause the later decreases in poverty. Thus, we can eliminate “Technology growth stagnated worldwide between 1996 and 1999.” We do, however, have a powerful rationale for the statement “East Asia underwent a major technological revolution between the years 1990 and 2015.” East Asia saw a dramatic decrease in poverty across the period graphed. A major technological revolution would certainly help to explain why that region was able to cut poverty rates so substantially. Thus, “East Asia underwent a major technological revolution between the years 1990 and 2015” is our correct answer.

Example Question #2 : Graphs And Tables

The following is adapted from a published article entitled “Dilemmas in Data, the Uncertainty of Impactors on CO2 Emissions.” (2019)

Proposed CO2 reduction schemes present large uncertainties in terms of the perceived reduction needs and the potential costs of achieving those reductions. In one sense, preference for a carbon tax or tradable permit system depends on how one views the uncertainty of costs involved and benefits to be received.

For those confident that achieving a specific level of CO2 reduction will yield very significant benefits, a tradeable permit program may be most appropriate. CO2 emissions would be reduced to a specific level, and in the case of a tradeable permit program, the cost involved would be handled efficiently, but not controlled at a specific cost level. This efficiency occurs because control efforts are concentrated at the lowest-cost emission sources through the trading of permits.

However, if one is more uncertain about the benefits of a specific level of reduction then a carbon tax may be most appropriate. In this approach, the level of the tax effectively caps the marginal control costs that affected activities would have to pay under the reduction scheme, but the precise level of CO2 achieved is less certain. Emitters of CO2 would spend money controlling CO2 emissions up to the level of the tax. However, since the marginal cost of control among millions of emitters is not well known, the overall effect of a given tax level on CO2 emission cannot be accurately forecasted.

A recent study was conducted to assess the impact of a carbon tax implemented in 2008 on the petroleum sales of a sample of cities, both those impacted by the tax, and those that were not. Based on this data, it is clear that enforcing limitations, permits, or taxation has some impact on the purchase decisions of those involved, but the extent of this impact and the best steps for achieving a reduction in carbon emissions remain unknown. In order to more thoroughly understand the impact of these methods on the purchasing decision, and thus, the emissions impact of individuals, further studies will be required.

[Figure: per capita petroleum purchases over time for cities subject to the 2008 carbon tax and cities not subject to it]

The data presented in the graph best supports which of the following excerpts from the text?

Possible Answers:

“Preference for a carbon tax or tradable permit system depends on how one views the uncertainty of costs involved and benefits to be received.”

“It is clear that enforcing limitations, permits, or taxation has some impact on the purchase decisions of those involved.”

“Proposed CO2 reduction schemes present large uncertainties in terms of the perceived reduction needs and the potential costs of achieving those reductions.”

“Since the marginal cost of control among millions of emitters is not well known, the overall effect of a given tax level on CO2 emission cannot be accurately forecasted.”

Correct answer:

“It is clear that enforcing limitations, permits, or taxation has some impact on the purchase decisions of those involved.”

Explanation:

While all of the listed options align with the author’s perspective, we’re looking for what is directly supported by the graph. The graph displays the petroleum purchases of two groups - one that was impacted by a carbon tax in 2008, and another that was not impacted by this tax. The graph loosely reflects that the impacted group responded by decreasing their collective per capita consumption of petroleum, while the unimpacted group continued to see its per capita consumption rise. This leads us to the conclusion that the individuals’ decisions were likely impacted by the tax, or that “It is clear that enforcing limitations, permits, or taxation has some impact on the purchase decisions of those involved.” While the passage as a whole speaks about the uncertainty of attempting to predict the impact of carbon taxes and permits on the consumption of individuals, the graph displays known data and a trend that can be derived from that data, not unknowns. So, we can eliminate the remaining three options, which all address the remaining uncertainty regarding tax levels and their direct impact on CO2 emissions.

Example Question #3 : Graphs And Tables

The following is adapted from a published article entitled “Dilemmas in Data, the Uncertainty of Impactors on CO2 Emissions.” (2019)

Proposed CO2 reduction schemes present large uncertainties in terms of the perceived reduction needs and the potential costs of achieving those reductions. In one sense, preference for a carbon tax or tradable permit system depends on how one views the uncertainty of costs involved and benefits to be received.

For those confident that achieving a specific level of CO2 reduction will yield very significant benefits, a tradeable permit program may be most appropriate. CO2 emissions would be reduced to a specific level, and in the case of a tradeable permit program, the cost involved would be handled efficiently, but not controlled at a specific cost level. This efficiency occurs because control efforts are concentrated at the lowest-cost emission sources through the trading of permits.

However, if one is more uncertain about the benefits of a specific level of reduction then a carbon tax may be most appropriate. In this approach, the level of the tax effectively caps the marginal control costs that affected activities would have to pay under the reduction scheme, but the precise level of CO2 achieved is less certain. Emitters of CO2 would spend money controlling CO2 emissions up to the level of the tax. However, since the marginal cost of control among millions of emitters is not well known, the overall effect of a given tax level on CO2 emission cannot be accurately forecasted.

A recent study was conducted to assess the impact of a carbon tax implemented in 2008 on the petroleum sales of a sample of cities, both those impacted by the tax, and those that were not. Based on this data, it is clear that enforcing limitations, permits, or taxation has some impact on the purchase decisions of those involved, but the extent of this impact and the best steps for achieving a reduction in carbon emissions remain unknown. In order to more thoroughly understand the impact of these methods on the purchasing decision, and thus, the emissions impact of individuals, further studies will be required.

[Figure: per capita petroleum purchases over time for cities subject to the 2008 carbon tax and cities not subject to it]

Which of the following, if true, would weaken the use of the graph to draw the conclusion that “it is clear that enforcing limitations, permits, or taxation has some impact on the purchase decisions of those involved”?

Possible Answers:

There were more individuals in the samples not subject to the carbon tax than subject to the carbon tax.

There were more individuals in the samples subject to the carbon tax than not subject to the carbon tax.

The individuals involved in the study were unaware of the implementation of the carbon tax.

Changes to the landscape of fuel-efficient vehicles held the potential to impact petroleum purchases during this time.

Correct answer:

The individuals involved in the study were unaware of the implementation of the carbon tax.

Explanation:

In this example, we’re looking for something that, if true, breaks up the connection between the graph and the conclusion that “it is clear that enforcing limitations, permits, or taxation has some impact on the purchase decisions of those involved.” So, we want something that tells us it might not make sense to use this data to draw the given conclusion. The sample sizes - whether the number of individuals or the number of cities - are irrelevant to us here, since the data is already presented to us in “per capita” or per person format. Additionally, while “Changes to the landscape of fuel-efficient vehicles held the potential to impact petroleum purchases during this time,” we have no reason to believe that these changes would have affected the cities with and without the carbon tax differently. So, this leaves us with our correct answer: “The individuals involved in the study were unaware of the implementation of the carbon tax.” If those involved in the study didn’t know about the carbon tax, we can no longer conclude that it is because of this knowledge that they changed their petroleum purchasing tendencies.

Example Question #2 : Relating Graphics To The Passage

This passage is adapted from “Flagship Species and Their Role in the Conservation Movement” (2020)

Until recently, two schools of thought have dominated the field of establishing “flagship” endangered species for marketing and awareness campaigns. These flagship species make up the subset of endangered species that conservation experts utilize to elicit public support - both financial and legal - for fauna conservation as a whole. 

The first concerns how recognizable the general public, the audience of most large-scale funding campaigns, finds a particular species, commonly termed its “public awareness.” This school of thought was built on the foundation that if an individual recognizes a species from prior knowledge, cultural context, or previous conservational and educational encounters (in a zoo environment or classroom setting, for instance), that individual would be more likely to note and respond to the severity of its endangered status. However, recently emerging flagship species such as the pangolin have challenged the singularity of this factor. 

Alongside public awareness, conservation experts have long considered a factor they refer to as a “keystone species” designation in the flagship selection process. Keystone species are those species that play an especially vital role in their respective habitats or ecosystems. While this metric is invaluable to the environmentalists in charge of designating funds received, recent data has revealed the more minor role a keystone species designation seems to play in the motivations of the public. 

Recent scholarship has questioned both the singularity and the extent to which the above classifications impact the decision making of the general public. Though more complicated to measure, a third designation, known as a species’ “charisma,” is now the yardstick by which most flagship species are formally classified. Addressing the charisma of a species involves establishing and collecting data concerning its ecological (interactions with humans/the environments of humans), aesthetic (appealing to human emotions through physical appearance and immediately related behaviors), and corporeal (affection and socialization with humans over the short- and long-terms) characteristics. This process has been understandably criticized by some for its costs and failure to incorporate the severity of an endangered species’ status into designation, but its impact on the public has been irrefutable. While keystone and public awareness designations are still often applied in the field because of their practicality and comparative simplicity, charisma is now commonly accepted as the most accurate metric with which to judge a species’ flagship potential. 

The information in the graphs displays the results of a study conducted on a single sample of donors to wildlife conservation efforts. The first graph displays the percent who stated they were most likely to donate to a cause for each endangered species category based on a brief description of public awareness, keystone designation, and charisma in endangered species; the second graph displays the actual results of their donation choices. Note: each individual prioritized exactly one designation type and donated to exactly one designation type.

[Figures: stated donation likelihood and actual donations by designation type, as described above]

Based on the information in the graphs and passage above, which of the following can be concluded?

Possible Answers:

The individuals in the study tended to underestimate their likelihood to donate to species based on their recognition of said species.

The charisma designated species received the highest dollar value of donations.

The majority of the individuals in the study actually donated to species based on their recognition of said species.

The individuals in the study tended to overestimate their likelihood to donate to species based on the species’ integral role in its ecosystem.

Correct answer:

The individuals in the study tended to overestimate their likelihood to donate to species based on the species’ integral role in its ecosystem.

Explanation:

In order to effectively tackle this question, we’ll want to ensure we’re familiar with the meanings of each designation system presented in the text/figures. The three designation systems presented are as follows:

Public awareness - “how recognizable the general public finds a particular species”

Keystone designation - “species that play an especially vital role in their respective habitats or ecosystems”

Charisma - “ecological (interactions with humans/the environments of humans),  aesthetic (appealing to human emotions through physical appearance and immediately related behaviors), and corporeal (affection and socialization with humans over the short- and long-terms) characteristics”

So, based on the data in the graphs, we can conclude that “The individuals in the study tended to overestimate their likelihood to donate to species based on the species’ integral role in its ecosystem,” since keystone designation made up 48% of the sample’s stated likelihood but only 25% of actual donors. We cannot, however, conclude that “The individuals in the study tended to underestimate their likelihood to donate to species based on their recognition of said species,” since public awareness made up 30% of stated likelihood and only 25% of actual donors - an overestimate, not an underestimate. We are also unable to conclude anything about the dollar value of the donations, as the data specifically references the percent of donors, not donation amounts. (Be sure to pay close attention to what type of data we’re being given in graphs and tables!) Nor can we conclude that “The majority of the individuals in the study actually donated to species based on their recognition of said species”; that description refers to public awareness, while charisma makes up the majority (57%) of actual donors.

Example Question #7 : Relating Graphics To The Passage

This passage is adapted from “Flagship Species and Their Role in the Conservation Movement” (2020)

Until recently, two schools of thought have dominated the field of establishing “flagship” endangered species for marketing and awareness campaigns. These flagship species make up the subset of endangered species that conservation experts use to elicit public support - both financial and legal - for fauna conservation as a whole. 

The first concerns how recognizable the general public, the audience of most large-scale funding campaigns, finds a particular species, commonly termed its “public awareness.” This school of thought was built on the foundation that if an individual recognizes a species from prior knowledge, cultural context, or previous conservational and educational encounters (in a zoo environment or classroom setting, for instance), that individual would be more likely to note and respond to the severity of its endangered status. However, recently emerging flagship species such as the pangolin have challenged the singularity of this factor. 

Alongside public awareness, conservation experts have long considered a factor they refer to as a “keystone species” designation in the flagship selection process. Keystone species are those species that play an especially vital role in their respective habitats or ecosystems. While this metric is invaluable to the environmentalists in charge of designating funds received, recent data suggests that a keystone species designation plays a comparatively minor role in the motivations of the public. 

Recent scholarship has questioned both the singularity of the above classifications and the extent to which they impact the decision making of the general public. Though more complicated to measure, a third designation, known as a species’ “charisma,” is now the yardstick by which most flagship species are formally classified. Addressing the charisma of a species involves establishing and collecting data concerning its ecological (interactions with humans/the environments of humans), aesthetic (appealing to human emotions through physical appearance and immediately related behaviors), and corporeal (affection and socialization with humans over the short- and long-terms) characteristics. This process has been understandably criticized by some for its costs and failure to incorporate the severity of an endangered species’ status into designation, but its impact on the public has been irrefutable. While keystone and public awareness designations are still often applied in the field because of their practicality and comparative simplicity, charisma is now commonly accepted as the most accurate metric with which to judge a species’ flagship potential. 

The graphs display the results of a study conducted on a single sample of donors to wildlife conservation efforts. The first graph displays the percentage of participants who stated they were most likely to donate to a cause in each endangered species category after reading a brief description of public awareness, keystone designation, and charisma in endangered species; the second graph displays the actual results of their donation choices. Note: each individual prioritized exactly one designation type and donated to exactly one designation type.

[Graphs: percentage of stated donation likelihood and percentage of actual donations, by designation type (public awareness, keystone, charisma)]

The information in the graphs best supports which of the following statements in the passage?

Possible Answers:

“its impact on the public has been irrefutable.” (paragraph four)

“this metric is invaluable to the environmentalists in charge of designating funds received.” (paragraph three)

“keystone and public awareness designations are still often applied in the field because of their practicality and comparative simplicity.” (paragraph four)

“a third designation, known as a species’ “charisma,” is now the yardstick by which most flagship species are formally classified.” (paragraph four)

Correct answer:

“its impact on the public has been irrefutable.” (paragraph four)

Explanation:

In this example, we’re looking for the statement most directly supported by the information in the graphs. The graphs could relate to many different elements of the text, so we’ll want to use process of elimination. The graphs provide data about the motivations of the public: both what individuals state and how they actually donate. So, we are unable to support statements about the decisions of campaigns and the processes they tend to lean on. With this in mind, “this metric is invaluable to the environmentalists in charge of designating funds received,” “a third designation, known as a species’ ‘charisma,’ is now the yardstick by which most flagship species are formally classified,” and “keystone and public awareness designations are still often applied in the field because of their practicality and comparative simplicity,” while all true according to the passage, can be eliminated, as they are not supported by the *type* of data presented to us in the graphs. This leaves us with our correct answer, “its impact on the public has been irrefutable.” Looking back at the context, “its” refers to the charisma designation. The graphs certainly support the idea that charisma influences the public, as that designation motivated the actual donations of the majority of the donors in the study.
