Student autonomy
Covered by Inside Higher Ed, Times Higher Education, and Futurity
What if the key to boosting college students’ attendance and performance isn’t stricter rules, but more freedom? My work with Danny Oppenheimer, published in Science Advances on July 17, reveals the importance of giving students control over their own learning.
In one experiment, students were given the choice to make their own attendance mandatory. Contrary to faculty expectations, 90% of students chose to do so, committing themselves to attending class reliably or to having their final grades docked. Under this "optional-mandatory attendance" policy, students came to class more reliably than students whose attendance had simply been mandated. The pattern has held true: across five classes of 60-200 students, 73-95% opted for mandatory attendance, and at most 10% regretted their decision by semester's end.
A second experiment produced equally surprising results. When given the option to switch to an easier homework stream at any time prior to midterms, 85-90% of students chose to tackle the more challenging work. Under the "optional-mandatory homework" policy, students spent more time on their assignments and learned more over the semester than students who were compelled to complete the same work.
Dangerous Ideas in Science and Society
I teach a class called "Dangerous Ideas in Science and Society". It's grounded in the conviction that open, rigorous, and charitable discussion is our best tool for deeply understanding arguments and strengthening our own beliefs. A robust literature in the psychology of reasoning shows that engaging seriously with opposing viewpoints is the closest thing we have to an epistemic immune system. In Dangerous Ideas, students are confronted with arguments they're inclined to reject—and encouraged to defend and explore their own views. The class explores arguments about education, speech, identity, abortion, disability, gun rights, immigration, and the existence of God, among other topics.
Inside Higher Ed featured the class and my lab's education-focused AI projects.
Argument visualization
Imagine Microsoft invents an ultra-realistic toy guitar: It lets you do something a bit like what we call "playing air guitar", but unlike an air guitar, Microsoft's toy makes you sound like Jimi Hendrix. How could you improve the public's guitar skills if people who can't strum a chord to save their lives were indistinguishable from talented guitarists? How would teachers know when a student had mastered a simple strumming pattern? How would they diagnose errors? In the real world, many college students do something in their midterms and final papers that looks like engaging with an argument. And sometimes things are as they seem. But the woolly medium of student prose often masks deep problems in students' understanding. These problems are hard for teachers to detect because, like Microsoft's guitar, prose creates misleading evidence of ability, and the curse of knowledge makes it even more difficult for instructors to assess students' true understanding.
My colleagues and I have found that a technique called "argument visualization" can help students and instructors overcome these challenges. You can read about our experiment with first-year college students in seven small (15-student) seminars at nature.com, and you can see some students in action in the short video near this text. I maintain materials for beginners on my website, PhilosophyMapped, and I will soon launch another site, ArgumentBase.org, an organized, searchable, wiki-like resource for sharing and collaboratively refining short, intriguing argumentative texts. I hope these materials will help teachers integrate argument visualization into their classes and provide members of the public with interesting arguments for practice and reflection.
Read Higher Ed Dive's coverage of how argument visualization can be especially useful when teaching controversial topics.
Discussion markets for fair, automated discussion moderation
Whenever a group of people holds a discussion (in classrooms, faculty meetings, conferences, boardrooms, the UN, etc.), it faces a problem: Who gets to speak, and when? At root, moderating a discussion involves distributing a scarce resource—the total time available for speaking to the group. But even trained moderators usually can't access the information needed to promote really great discussions: Who most strongly desires to speak at a given moment? What does she plan to say? How valuable would her contribution be now relative to the contributions that other participants would make instead? The most common solution—a moderator selects from among people who've raised their hands—captures almost none of this information. How do different moderation conventions affect learning, problem solving, and decision making, and how can we tell?
In ongoing studies, my collaborators and I are investigating how different moderation conventions affect the quality of discussions, problem solving, and decision making. Automated discussion moderation, and especially what I call a discussion market, can dramatically improve the quality of discussion. In a discussion market, an app supplies each participant with currency and auctions off speaking time. When many people wish to contribute, the cost of speaking rises, encouraging participants to defer contributions that they judge to be less urgent. Shy participants, having contributed relatively little, retain more of their currency; they can therefore outbid frequent speakers and rise in the queue.
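To make the mechanism concrete, here is a minimal sketch of a discussion-market auction in Python. Everything in it (the class names, the pay-your-bid pricing rule, the starting budget) is an illustrative assumption of mine, not the design of the Palaver app:

```python
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    budget: float = 100.0  # starting currency; the amount is an arbitrary choice

class DiscussionMarket:
    """Toy discussion market: each speaking slot goes to the highest
    bidder, who pays her bid out of a fixed budget. A real design could
    instead use second-price auctions, budget refresh rules, etc."""

    def __init__(self, participants):
        self.participants = {p.name: p for p in participants}
        self.bids = {}  # name -> bid for the next speaking slot

    def bid(self, name, amount):
        # A participant can never bid more currency than she has left.
        self.bids[name] = min(amount, self.participants[name].budget)

    def award_turn(self):
        """Give the next slot to the highest bidder and charge her bid.
        Quiet participants have spent little, so they can outbid
        frequent speakers whenever they judge a contribution urgent."""
        if not self.bids:
            return None
        winner = max(self.bids, key=self.bids.get)
        self.participants[winner].budget -= self.bids[winner]
        self.bids.clear()
        return winner

market = DiscussionMarket([Participant("Ana"), Participant("Ben")])
market.bid("Ana", 30)  # Ana judges her point urgent
market.bid("Ben", 10)
print(market.award_turn())  # Ana speaks; her remaining budget is 70
```

On this pay-your-bid rule, speaking when demand is high is expensive, which is what encourages participants to defer their less urgent contributions.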
We plan to study queuing systems by having groups of participants work on difficult problems (e.g., puzzles, LSAT Logical Reasoning questions, etc.) in the lab and online. Pilot results indicate that discussion markets do far better than (a) having each participant speak in turn, (b) having participants self-moderate, and (c) having one participant moderate using standard hand raising conventions.
I talk about automated discussion moderation in my TEDxPrinceton presentation, Are we hearing the best ideas at the table?
Better reasoning and communication
This project uses visualizations of the logical structure of arguments to help people reason and communicate more effectively. In preliminary experiments (since replicated), Vidushi Sharma and I found that visual presentation can dramatically reduce confirmation bias compared with prose, at least for some target arguments (short report shared on PhilPapers). In current studies, Danny Oppenheimer and I are investigating whether this effect depends on the target argument appealing to values shared by the audience.
I have several other projects that aim to improve reasoning and communication, often focused on difficult moral, political, and philosophical questions. So far my colleagues and I have produced:
MindMup AV (https://argument.mindmup.com), a free and open-source app for argument analysis currently used in at least 37 US educational institutions and by people all over the world who find it online (with Gojko Adzic, Neuri Consulting, and other members of the MindMup team),
PhilosophyMapped (https://philmaps.com), a website supporting analytical-reasoning pedagogy, used by instructors at at least 12 US institutions and by thousands of people internationally,
ArgumentBase (https://argumentbase.org), a new Wikipedia-like website for collecting, sharing, and collaboratively refining philosophical and other arguments (ask me for an account if you're interested; launch planned for late 2019), and
Palaver, an app that implements a market-based approach (which I call a "discussion market") designed to dramatically improve group discussion and decision-making skills.
These projects have practical applications in education, industry, and civic life. In the long run, I hope they will also help to deepen our understanding of human learning, communication, and decision making. The fourth of them, Palaver, is introduced above under "Discussion markets for fair, automated discussion moderation".
The true-self in folk and scientific psychology
Both moral commonsense and scientific psychology distinguish action caused by persons from action caused by situations. Consider Darley and Batson's “Good Samaritan” study. All participants were asked to give a short sermon in a nearby building, but some were made to feel hurried while others were made to feel relaxed. On their way to the building, both hurried and relaxed participants encountered a man, groaning and slumped in a doorway. Darley and Batson’s main finding: Hurried participants were six times less likely than relaxed ones to offer help. Laypeople and social scientists usually say that this callous behavior was “caused by the situation”. But what does this mean? What inclines us to explain some actions in more agent-focused terms and others in more situation-focused terms?
Social psychologists and philosophers often attempt to articulate the person/situation dichotomy in terms of how agents would behave in slightly different circumstances. According to this view, we prefer to explain hurried participants' callousness in "situational" terms because we believe that they would have offered to help the sick-looking man had they been more relaxed. However, I have found counterexamples: cases that reliably elicit agent-centric explanations for behaviors that are highly sensitive to a factor that could easily have been otherwise, and for which the standard account therefore predicts situational explanations.
Using a series of empirical studies (some published in Cognition [1]), I argue for a fundamentally different approach. On my view, people prefer agency-minimizing explanations primarily because they suspect a mismatch between the moral valence of the action and that of the person. That is, people often judge that an action is explained by the situation because they judge that a good person is doing a bad deed, or a bad person a good deed. If you want to predict whether someone will explain an action by emphasizing the actor's agency or circumstances, you can't do better than to find out about their moral attitudes toward the action; beliefs about how the actor would have behaved in slightly different circumstances are surprisingly unimportant. If this "mismatch hypothesis" is correct, a central distinction of social science is not scientific at all—it's normative.
Origins of the "good" true self
If you ask Americans to explain why a young woman terminated her pregnancy, those who believe abortion to be morally evil are likely to emphasize features of her environment: Maybe she was peer-pressured, or perhaps the presence of a clinic in her neighborhood somehow influenced her decision [1]. By contrast, those who believe abortion is morally permissible are likely to prefer agency-emphasizing explanations: She is devoted to her career and firmly convinced that it's not the right time to have a kid. I have found this pattern when people considered actions as varied as aborting a pregnancy, having gay sex, converting to Islam, owning slaves in the antebellum South, and identifying as Conservative or Liberal today. I have found it in studies conducted with participants in the United States and in India, and other researchers have found related effects with Russians, Singaporeans, and Colombians [2], as well as Americans [3]. It therefore seems reasonable to suspect that morals affect explanatory preferences because of a species-typical tendency to represent other agents as normatively aligned with the self (i.e., as "good"). The evidence is tentative, as this hypothesis has not been tested with participants isolated from Western culture, but it points to a fascinating question: Could a tendency to represent other agents as normatively aligned with the self, even in the face of clear statistical information to the contrary, confer important fitness benefits on people living in small-scale human societies like those in which all humans lived for roughly a hundred thousand years?
Statistical information appears to exert little influence on whether people prefer to explain morally valenced past behaviors more in terms of the actor's agency or more in terms of his environment, but it powerfully influences people's predictions about what agents will do in the future. For example, in one study, I told participants a story about Mark, an evangelical Christian man who believes that homosexuality is immoral but is himself attracted to other men. In the story, Mark has a bad fight with his father and later that day gives in to his erotic urges by having sex with his friend, Bill. When I asked, "Why did Mark have sex with Bill?", high-prejudice participants preferred explanations like "Because he endured a stressful fight with his father earlier that day", while low-prejudice participants preferred to say "Because he's gay".
Why would male participants who strongly endorse the claim "Male homosexuality is disgusting" think that Mark's fight with his father is an especially good explanation? Presumably, they don't worry about ending up in bed with another man after fighting with their own fathers. A second study suggests a possible resolution. In this study, participants read the same story and again evaluated potential explanations, but this time, while some participants read that Mark has often been attracted to Bill and other men, others read that Mark is rarely attracted to Bill or other men. All participants then evaluated potential explanations for the encounter and estimated the probability that Mark would have sex with Bill in less trying future circumstances. An interesting result emerged: participants who read that Mark often finds Bill and other men attractive were, again, no more likely to emphasize Mark's agency when explaining his behavior; however, they were much more likely to predict that he would have sex with Bill in the future.
I argue that this seemingly odd combination of sensitivity and insensitivity to evaluative and statistical information is designed to solve important adaptive problems induced by humans' reliance on social norms. First, social norms require a veneer of impartiality to win allegiance from most members of a cooperating group [3]. But norm followers must not be too impartial, lest they provide benefits or absorb costs for those in whom they have no fitness stake [4]—a risk that might be called "Peter Singer Too Soon". A bias to represent others as "deep down" normatively aligned with the self may help to balance these competing requirements. In small-scale societies, where territorial boundaries are strictly enforced and warfare between tribes is common [5], non-cooperators are far more likely to have competing interests than are cooperators. A bias to represent others as "deep down" normatively aligned with the self will cause such agents to be less sensitive to out-group interests and more willing to impose costs on out-group members for relatively small gains to in-group and self. The same effect could be achieved simply by devaluing out-group members' welfare directly [5]. But if agents are uncertain about their current or future status within a group, over-reliance on such direct parochialism may undermine norm acceptance among potential cooperators.
Belief in the "good" true self may also help to restore valuable relationships after intergroup and within-group conflicts. How do people in small-scale societies that lack centralized legal systems cause others to value their welfare, instrumentally or intrinsically? Much research suggests that cognitive systems for revenge and forgiveness, respectively, provide an important part of the answer:6 If I know you to be vengeful, I will be careful not to impose costs on you for relatively small benefits to myself. However, revenging wrongs committed by vengeful agents can itself be costly, converting potential allies into foes and, in the worst cases, triggering disastrous cycles of revenge and counter-revenge (e.g., blood feuds). Plausibly, an optimal strategy is to revenge norm violations, through violence or by other means, and then attempt reconciliation if the expected value of further cooperation is positive. A tantalizing possibility is that belief in the "good" true self is adaptive partly because it promotes this strategy.
If true, this hypothesis would help explain why victims of serious norm violations (e.g., genocidal violence) are sometimes drawn to agency-minimizing explanations of their victimizer's actions: If the norm-violating behavior was caused by circumstances that are unlikely to recur and the victim faces a shortage of potential cooperative partners, then "forgiveness" can have positive expected value, even in response to callous crimes. By offering forgiveness, the victim can impress the victimizer with beneficence, helping to restore relations and making it more likely that the two will cooperate if beneficial opportunities arise. For example, Deogratias Habyarimana helped kill Cesarie Mukabutera's children during the Rwandan genocide. When the two were interviewed and photographed together two decades later [7], Habyarimana said: "I told her what happened. When she granted me pardon, all the things in my heart that had made her look at me like a wicked man faded away." Interestingly, Mukabutera did not see things the same way as Habyarimana: She did not believe that it was his heart that caused his wickedness. "The genocide," she said, "was due to bad governance that set neighbors, brothers and sisters against one another. Now you accept and you forgive. The person you have forgiven becomes a good neighbor. One feels peaceful and thinks well of the future."
My current work on the topic of the true self investigates these hypotheses from both computational and behavioral perspectives. I am also exploring whether recent work in psychology and neuroscience points to cognitive and neural mechanisms that could plausibly generate such effects.
Implications for normative theory
The mismatch hypothesis also has important implications for philosophical theorizing about moral responsibility. From John Dewey to Angela Smith, many philosophers have been attracted to the idea that we are responsible only for actions that are "self-disclosing"—i.e., for actions that reveal our true selves. On these views, judgments about self-disclosure provide a non-moral basis for judgments about praise and blame. I argue that this is exactly the wrong way round. Self-disclosure judgments reflect the same principles as the person/situation distinction: The concept of the true self and the person/situation distinction are two windows looking onto a single aspect of human psychology. That is, people judge that an action is self-disclosing because they judge that its valence matches the actor's. If this is correct, self-disclosure judgments do not reveal an action-theoretic relation and cannot provide a non-moral basis for judgments of praise and blame.
Modeling the true self
The mismatch view is not fully precise because it does not articulate the notion of a true self. The simplest story goes like this: Agents possess a unitary moral "essence", either good or bad, to which the moral valences of their actions are compared. More complex stories relativize agents' moral characters to particular domains (for example, someone might be both honest and irascible). These more complex versions suggest a connection between domain-relativized dispositions for action and moral evaluation. An agent who—in some contextually specified sense—usually performs an action of some kind discloses her true self when she performs that kind of action and does not disclose her true self when she performs a contrary type of action. Moral attitudes might then drive self-disclosure judgments by affecting the contextual specification. With my collaborator, Yoaav Isaacs, I explore the prospects for understanding domain-relative dispositions using counterfactuals [8]. We show that standard Lewisian semantics is unsatisfactory but that an analysis using Moritz Schulz's "arbitrary selection" semantics [9] is more promising, and we describe experimental studies to test our view.
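For readers who want the formal background, the standard Lewisian truth conditions referred to above can be stated as follows. This is textbook material (Lewis 1973) in notation of my own choosing; it is not a sketch of Schulz's alternative semantics or of our argument against Lewis:

```latex
% Lewis (1973): "If A were the case, C would be the case" is true at
% world w iff either no possible world makes A true, or some world
% where A and C both hold is closer to w than any world where A holds
% but C fails. Here <_w is comparative similarity of worlds to w.
\[
  w \Vdash A \mathrel{\Box\!\!\to} C
  \;\iff\;
  \neg\exists u\,(u \Vdash A)
  \;\lor\;
  \exists u \bigl[\, u \Vdash A \land C \;\land\;
    \forall v\,( v \Vdash A \land \neg C \;\rightarrow\; u <_w v ) \,\bigr]
\]
```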
Attribution and responsibility for “group actions”
To some people, the passage of the Patriot Act seemed to express the true nature of the United States (a "dispositional" explanation); to others, it seemed only to reflect the chaotic aftermath of 9/11 (a "situational" explanation) [10]. What explains these different views? Traditional attribution theory suggests that laypeople make causal attributions by assessing whether agents perform actions only in the presence of a situational pressure or whether they also perform those actions in its absence. That is, on the traditional view, when people explain action (and attribute responsibility) they are primarily interested in causal information. In recent work (in preparation), Shamik Dasgupta and I argue that attitudes toward a nation's "essence" lead people to emphasize more dispositional or more situational explanations of that nation's actions. Paralleling recent experimental work on psychological essentialism [11,12], we argue that people represent nations (and presumably some other natural groups) as possessing underlying, causally active moral essences. We then apply experimental paradigms from my work on individual attribution to argue that people rely on the same cognitive processes when explaining group actions, and we consider the implications for political discourse and practice.
[1] Cullen, S. (2018). When do circumstances excuse? Moral prejudices and beliefs about the true self drive preferences for agency-minimizing explanations. Cognition, 180, 165-181.
[2] De Freitas, J., Sarkissian, H., Newman, G. E., Grossmann, I., De Brigard, F., Luco, A., & Knobe, J. (2018). Consistent belief in a good true self in misanthropes and three interdependent cultures. Cognitive Science, 42, 134-160.
[3] Newman, G. E., De Freitas, J., & Knobe, J. (2015). Beliefs about the true self explain asymmetries based on moral judgment. Cognitive Science, 39(1), 96-125.
[4] DeScioli, P., & Kurzban, R. (2009). Mysteries of morality. Cognition, 112(2), 281-299.
[5] Henrich, J. P., Boyd, R., Bowles, S., Fehr, E., Camerer, C., & Gintis, H. (Eds.). (2004). Foundations of human sociality: Economic experiments and ethnographic evidence from fifteen small-scale societies. Oxford University Press.
[6] McCullough, M. E., Kurzban, R., & Tabak, B. A. (2013). Cognitive systems for revenge and forgiveness. Behavioral and Brain Sciences, 36(1).
[7] https://www.nytimes.com/interactive/2014/04/06/magazine/06-pieter-hugo-rwanda-portraits.html
[8] Cullen, S., & Isaacs, Y. (In preparation). Arbitrary Self.
[9] Schulz, M. (2014). Counterfactuals and arbitrariness. Mind, 123(492), 1021-1055.
[10] Cullen, S., & Dasgupta, S. (In preparation). Do nations have essences? Attribution and responsibility for national actions.
[11] Meyer, M., Leslie, S. J., Gelman, S. A., & Stilwell, S. M. (2013). Essentialist beliefs about bodily transplants in the United States and India. Cognitive Science, 37(4), 668-710.
[12] Meyer, M., Gelman, S. A., Roberts, S. O., & Leslie, S. J. (2017). My heart made me do it: Children's essentialist beliefs about heart transplants. Cognitive Science, 41(6), 1694-1712.
The Socrates Platform: eliciting and measuring concepts and other capacities
In social science and philosophy, researchers are often interested in understanding the ordinary concepts that people use to navigate their lives. Think of the mountains of articles and books addressed to the folk concepts of responsibility, freedom, intentional action, knowledge, causation, personhood, and so on. In the new field of experimental philosophy (x-phi), much of this research involves asking participants to judge whether a vignette (often adapted from a philosophical thought-experiment) instantiates a concept of interest. An assumption underlying this research is that naive participants working in a standard experimental context can reliably apply their own concepts correctly to such cases. While this assumption undergirds much work in experimental philosophy and psychology, it is not obviously true, as should be clear to any philosopher who has read Plato’s dialogues: Interlocutors are regularly caught confidently misapplying their own concepts.
Eliciting people's concepts is not straightforward, whether by introspection and conversation with the colleague in the next office or by some version of experimental philosophy. In "Survey-Driven Romanticism", I argued that laypeople's responses to x-phi surveys are often driven by "pragmatic influences" (implicatures, contrast effects, question order, etc.) and by a range of errors (e.g., failing to read or understand the survey). While x-phi surveys have improved (e.g., "really knows" is rarely contrasted with "only believes"), fundamental methodological difficulties remain unaddressed. For, as all philosophers know, people often want to describe hypothetical cases differently after some reflection—philosophically interesting concepts are rarely immediately luminous. Ideally, we should elicit laypeople's intuitions after they have reached something like reflective equilibrium, but that often requires something or someone external to prompt reflection.
My collaborators Philip Chapkovski (Higher School of Economics, Moscow), Nick Byrd (CMU), and Neil Thomason (The University of Melbourne) and I have developed a software platform ("Socrates") that aims to measure participants' attitudes in "quasi-reflective equilibrium." Our software groups players according to their prior beliefs about some question (e.g., does a Gettier-ed agent know that p?), informs them of each other's views, and tells them that they and their fellows will be rewarded if and only if the group converges (in some suitably defined sense) on the "correct" response without detectable cheating. Players then enter discussions that are monitored for cheating by researchers or by further participants. We are now collecting pilot data and plan to release our software alongside preliminary experimental results this year. We hypothesize that our method induces participants to overcome the effects of difficult-to-predict distortions and to apply their concepts correctly. To ensure that we're not just eliciting WEIRD intuitions, we intend to compare data about seemingly analogous concepts from monolingual speakers of other languages, particularly non-Indo-European ones.
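As a rough illustration of this reward structure, here is a minimal Python sketch of grouping-by-prior and a convergence check. The 90% threshold, the grouping rule, and every name below are assumptions I have made up for the example; they are not the platform's actual design:

```python
from collections import Counter

def group_by_prior(players):
    """Group players by their initial answer to the target question."""
    groups = {}
    for name, answer in players.items():
        groups.setdefault(answer, []).append(name)
    return groups

def converged(final_answers, correct, threshold=0.9):
    """Count the group as converged when at least `threshold` of the
    players give the designated "correct" response after discussion."""
    counts = Counter(final_answers.values())
    return counts.get(correct, 0) / len(final_answers) >= threshold

priors = {"p1": "knows", "p2": "doesn't know", "p3": "doesn't know"}
print(group_by_prior(priors))  # {'knows': ['p1'], "doesn't know": ['p2', 'p3']}

finals = {"p1": "doesn't know", "p2": "doesn't know", "p3": "doesn't know"}
if converged(finals, correct="doesn't know"):
    print("Reward the group, pending the human check for cheating.")
```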
While our initial targets are concepts studied by philosophers, we expect this work to advance inquiry in fields far beyond experimental philosophy and social psychology.
Games for improving discussions
Most people's attention will drift after 20 minutes of intense concentration, so interspersing discussions with 2-4-minute games can improve learning. In the video below, CMU students in an upper-level seminar on moral psychology take a break by playing "Whoosh!". This game is about spontaneity and attentiveness. It helps students be more aware of their classmates and feel less anxious about contributing to discussions. It's also super fun.
In the first meeting, start with the simplest move, "Whoosh!", which passes the energy along in whatever direction it's traveling. Students should focus on keeping the energy moving smoothly around the circle. Next meeting, introduce "Boing!", which bounces the energy off a player's chest, reversing its direction. Then introduce "Zap!", which zaps the energy to anyone in the circle who can't be reached using one of the other moves. This move keeps everyone on their toes (see what happens when a student glances at her watch in the clip below) and is useful for breaking any "Boing!"-loops that might occur. Once students are comfortable with the basic moves, challenge them to invent new ones and to experiment with motion. This semester, my students invented "Yip!", a move that skips one person; "Yippy!", a move that combines two "Yip!"s; "Ba-Boing!", a combination of "Boing!" and "Yip!"; and "Yoink!", a move that intercepts a "Yip!" or "Yippy!" and zaps the energy to another player. It's also fun to play with the volume and speed of the game, for example, by playing silently or in slow motion. For anyone who prefers rules to demonstrations, a toy simulation of the basic moves follows.
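This Python sketch models the basic move semantics only; the player indices, the function name, and the unrestricted "Zap!" target are simplifications of mine (in the real game, "Zap!" is reserved for players out of reach of the other moves):

```python
def apply_move(move, holder, direction, n):
    """Return (new_holder, new_direction) for a circle of n players.
    "whoosh" passes the energy along the current direction; "boing"
    bounces it back to the sender and reverses direction; "yip" skips
    one player; ("zap", i) sends the energy to player i."""
    if move == "whoosh":
        return (holder + direction) % n, direction
    if move == "boing":
        return (holder - direction) % n, -direction
    if move == "yip":
        return (holder + 2 * direction) % n, direction
    if isinstance(move, tuple) and move[0] == "zap":
        return move[1] % n, direction
    raise ValueError(f"unknown move: {move!r}")

holder, direction, n = 0, 1, 6
for move in ["whoosh", "whoosh", "boing", "yip", ("zap", 4)]:
    holder, direction = apply_move(move, holder, direction, n)
    print(move, "-> energy at player", holder)
```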