
Locke and Land Acknowledgements 

The following is a guest post by Kyle Swan, Professor of Philosophy and Director of the Center for Practical and Professional Ethics at CSU Sacramento.


Stuart Reges is suing his employer, the University of Washington, for violating his First Amendment speech rights. The University initiated an investigation into whether Reges violated its anti-harassment policy for publishing a land acknowledgement statement on his course syllabus. His read, 

“I acknowledge that by the labor theory of property the Coast Salish people can claim historical ownership of almost none of the land currently occupied by the University of Washington.” 

Reges is protesting the recommended acknowledgment circulated by the University. The protest is clearly protected speech. I hope Reges wins his suit decisively. 

But what about Reges’s statement? He appears to be serious. In a Quillette article he writes, 

“I am a Georgist, and according to the Georgist worldview, Native Americans have no special claim to any land, just like the rest of us. But since few are familiar with that economic ideology, I leaned instead on a principle described in John Locke’s Second Treatise on Government, now known as the labor theory of property or the ‘homestead principle.’ To the Georgist idea that land is owned in common by all living people, Locke added that by mixing one’s labor with the land, one encloses it from the shared property because people own the products of their labor. If, for example, you make the effort to grow corn on an acre of land, you come to own that acre of land, so long as there is still plenty of land left for others to use.” 

The labor theory Reges refers to is a theory of property acquisition. In its original state, the entire earth is given to us in common. Nobody owns stuff in the world. The question is, how can we remove things from the commons and make rightful claims to them that would allow us then to exclude others from using them? 

Locke provides some conditions. First, it has to be true that someone hasn't already done that — the stuff must not already be owned. Second, the person appropriating something from the commons has to do it in a way that improves it through their productive activity — gathering berries, hunting deer, growing vegetables, clearing trees — all kinds of activity count. Finally, the way they do this has to leave enough and as good for others, so that no one would have reason to complain about the appropriation.

Professor Reges's acknowledgment is saying that Coast Salish people weren't ever in a position to claim ownership. They were never rightful owners. So when settlers came to the area in the late 1840s or whenever, he supposes these settlers were appropriating the land from the commons, rather than from a group of people.

Professor Reges’s application of Locke’s theory is dubious. I’m a philosopher, not a historian, but it seems unlikely to me that there were no groups of native people engaged in productive activity in the relevant areas when settlers showed up. 

More importantly, though, if Reges is correct and there weren't people there already with legitimate ownership claims, then the behavior of government authorities in the mid-19th century was very odd, because what they were doing was negotiating treaties with the native peoples, including the Salish. Doing so suggests their recognition of legitimate claims made by these groups. Why were they making contracts to acquire land from these native peoples if they didn't own the land? It seems incredible they would do this if they regarded the lands as unused, unoccupied, and unowned. So it looks like this was a transfer of land ownership rights, not an original appropriation of them.

Now everything hangs on how these contracts were presented and executed. Were the negotiations above board? Were all the relevant people groups represented? Did they all sign? Were all the terms of the contract fulfilled? Again, I’m a philosopher, not a historian, but if not, if there were problems with the agreement, then there wasn’t a legitimate transfer of the Washington territories. 

If that’s right, then a different part of Locke’s theory applies, which you can find in a later chapter of the 2nd Treatise, Of Conquest. There Locke argues that an aggressor who “unjustly invades another man’s right can…never come to have a right over the conquered…. Should a robber break into my house, and with a dagger at my throat make me seal deeds to convey my estate to him, would this give him any title? Just such a title, by his sword, has an unjust conqueror, who forces me into submission. The injury and the crime is equal, whether committed by the wearer of a crown, or some petty villain. The title of the offender, and the number of his followers, make no difference in the offence, unless it be to aggravate it.” 

And so “the inhabitants of any country who are descended and derive a title to their estates from those who are subdued and had a government forced upon them against their free consents, retain a right to the possession of their ancestors….the first conqueror never having had a title to the land of that country, the people who are the descendants of, or claim under those who were forced to submit to the yoke of a government by constraint, have always a right to shake it off, and free themselves….If it be objected, This would cause endless trouble; I answer, no more than justice does.” 

Locke’s theory of acquisition has two parts. The first is a theory about how original appropriation would be legitimate. The answer has to do with labor and productive activity. But that part doesn’t seem to apply to this case, since it looks like the Salish already had an existing claim. The second part of the theory is about how acquisition by transfer would be legitimate. The answer here has to do with agreement, and everything depends on the quality of the agreement and how it was or wasn’t honored. But we see there’s more to the story. When there has been no agreement, no just transfer and only conquest, Locke says that people retain “the native right of their ancestors.” 

Locke has long been accused of providing intellectual and justificatory cover for the (mis)appropriation of Indigenous people’s land in America and around the world. But it seems like it’s been Locke’s views that have been misappropriated.

Libertarianism and Abortion

I offer this as a tentative foray into a discussion about abortion, obviously spurred by the recent SCOTUS decision, Dobbs v. Jackson.  I note that I have long been convinced that as brilliant as Judith Jarvis Thomson’s contribution to the debate was, it doesn’t actually solve anything. (For more on that, see the chapter Lauren Hall and I co-authored in The Routledge Companion to Libertarianism.)

Different libertarians define their political ideology in different ways.  (No surprise; different egalitarians do this, different socialists do this, different welfare liberals do this; in short, all political ideologies are multiply defined.  Presumably those adopting the same name have at least a family resemblance.)  

Some libertarians adopt the Non-Aggression Principle. Others adopt a view that indicates simply that individual liberty is the predominant value, never set aside to promote any other value. Others accept that natural rights are the foundation for the view. Others adopt some form of consequentialism. My own libertarianism is defined by commitment to the harm principle: no interference with an individual or consensual group is permissible except to rectify or prevent genuine significant harm.

What does my form of libertarianism say about abortion? If the principle were only about harm to persons, abortion would presumably be clearly permissible since the fetus is not a person even though it is human. Of course, religious libertarians are likely to believe that all human life is sacred and that the intentional ending of such is necessarily wrongful. While I do not believe that, the harm principle in my view is not only about persons or humans. Genuine significant harm can occur to non-humans and merit interference, so whether or not the fetus is a person is not all that matters.

The question then is: is abortion a genuine significant harm? To clarify, I use the term "significant" to indicate that de minimis harms are not the sorts of things we interfere with (the cost of doing so may be a greater loss than the harm itself). I use the term "genuine" to indicate we are not discussing mere hurts or offenses, but hurts that wrongfully set back the interests of another (for more on this, see Feinberg or chapter 3 of my 2018). Once this is recognized, it should be clear that some abortions may well be genuine significant harms and some may not. Aborting an 8-month-old fetus merely because one decided on the spur of the moment to take a world tour is, I think, wrongful. It would also be significant—ending a human life that could have been very good. On the other hand, aborting a 6-week-old fetus because one was raped is unlikely to be wrongful and is at least plausibly less significant since at that stage spontaneous abortions are not uncommon.

Some will now likely object that what is wrongful is subjective. I basically think this is false—it is at least false if meant in any way that is troubling for what I am saying here. People do not simply decide for themselves what is wrongful. For more on this, see this BHL post and this one.

Assume I am right thus far: some abortions are genuinely and significantly harmful and some are not. What does that mean for law? On my view, answering this means first recognizing that law is a blunt instrument and as such has to be wielded carefully. Perhaps making all abortions illegal after 8 months of pregnancy is reasonable. Making all abortions illegal is not. If a clear set of guidelines for wrongfulness can be decided upon, perhaps laws against abortions that are wrongful would be reasonable. I can't here work out what such a list would include, but I do think a law against aborting 8-month-old fetuses reasonable. Perhaps also a law against aborting a fetus on a whim (perhaps have a 5-day waiting period). Laws requiring parental (or spousal) consent might sound good but are likely to run up against significant objections, including the real possibility of rape and incest and unacceptable familial pressure. The final list will be difficult to determine and absent a final list, jurisdictions may adopt differing lists (as SCOTUS allows).

Importantly, the jurisdiction issue is more complicated than some recognize. Philosophers have long debated what would give a government legitimate jurisdiction over a group of people. I won’t be able to delve into that here, but will simply assert that I do not believe any of the US state governments is likely to have genuine legitimacy over all people within their borders. For that reason, it strikes me as perfectly acceptable for the federal government or other state governments to aid an abortion-seeker in a state wherein they are unable to get an abortion legally. (For one way this can work, see this interesting story.)

Moralism and Contemporary Politics

People have asked me why I seem so focused on moralism.  There are multiple reasons, including having too much personal experience with people who operate as moralists, but what it really comes down to is that if we take moralism broadly to be a view that we should use the machinery of law to impose a moral view on the jurisdiction, most people in politics today are moralists.  (So, not just a justification of a specific law, but of the whole system of law.  A loss of viewpoint neutrality.)

On the right, we have what are called "common good constitutionalists" or "common good conservatives" who basically say we should interpret the Constitution of the United States of America in such a way that will get us the common good of society.  Of course, what they mean by "the common good" follows from their conservative beliefs (see Patrick Deneen and Adrian Vermeule).

On the left, you see basically the same thing without the claim made explicit. You have people pushing a particular view about how to guarantee equality and freedom in society, meaning a particular view about how society should be set up—and of course, that is a way meant to attain their view of the common good.

Of course those on the left and those on the right disagree about what the common good is.  This is what “culture clashes” are. So, for an obvious example, the two camps here would take opposing sides with regard to today’s SCOTUS decision in Dobbs v Jackson Women’s Health Organization.  One side (or at least some on that side) thinks all human life is deserving of the same basic respect as all other human life; the other thinks women deserve the respect that would enable them to control their own lives.

Both sides seem to believe that the machinery of the state—the law—should be used to make society moral, given their own (competing) views about what that entails.   (And we are likely to see this play out from SCOTUS fairly quickly.)

Importantly, libertarians are different.  We believe that people should be free to live their lives as they see fit subject only to the restriction that they don’t wrongfully harm others.  Some might say that this is a form of moralism as well—one wherein the view of morality is simply thinner than those of the other two views.  Perhaps that is right, but consider how it plays out.  Those on the left would want to force people to recognize and work for equal rights for women and to pay for programs meant to help with that.  Those on the right want to force women to carry pregnancies to term.  Meanwhile, libertarians want to force people not to force people to do anything.  That last seems obviously better.

Once more, against moralism in community

Legal moralists worry about the degradation of social norms and community connections. Their worry is that immorality tears at the “fabric of society” where that “fabric,” presumably, is the system of moral beliefs held in common by most people in the community.  Legal moralists are thus happy to impose their own moral views on others with the power of government—they think that this must be done if the norms (and moral beliefs commonly held) are threatened. 

In their willingness to use government power to impose their views of morality, moralists ignore the fact that when a government is empowered to force people to act in certain ways, that power crowds out the ability of individuals to interact freely with one another. That is a problem for their view because if individuals can't freely choose to act in ways others (including the moralists) think are bad, they also can't freely choose to act in ways others (again, including the moralists) think are good.  The problem for the moralist, then, is that you can't have a morally good community if people can't choose freely—you could at best have a simulacrum of such, more like a collection of automatons than a community of persons.  A morally good community is an association of moral beings—beings that choose for themselves—who (often) freely choose the good.  Putting this a different way, the moralist has to believe you can have a community made top down, forced upon members who are free, but that is impossible.  Community thus has to be made bottom-up; community is made by the individuals within it choosing to interact well together.

This applies, by the way, regardless of the level or size of community.  A condo or homeowners association, for example, can’t be made into a genuine community by fiat—even if those trying to do so take themselves to know (or actually do know!) what is best for everyone.  It simply cannot work—or rather cannot work unless everyone in the group agrees—in which case, it is not top down after all.  

To be clear: if you want to start a genuine community, do so only with people who already agree with you.  (Like, but not necessarily as rigid as, a cult.)  I’d add that if you want the community to remain a community, you’ll need a way to guarantee that all who enter it agree with you in advance.  (Again, like, but not necessarily as rigid as, a cult.) Otherwise, you’ll face opposition from some of the newcomers—different ideas about what the community should be.  And those ideas from newcomers (at least those who enter justly), will have just as much claim to be legitimate as yours.  Denying that entails not community, but moralist dictatorship.

What happened?

It's a bad week. Polarization has led to a federal truth commission (thank you Dems) and the likely removal of federal protection for reproductive freedom (thank you Reps). Neither of these, so far as we know, is popular. A working democracy of Americans would be unlikely to bring about either. But we don't seem to have that—or at least not to the extent that we might have thought. In part, this is because of the way discourse in our society has deteriorated. It is, to say the least, strained.

Given how strained our discourse has become, some would prefer to have less of it, walking away from those they disagree with and encouraging others to do the same. In Choosing Civility, P.M. Forni, cofounder of the Johns Hopkins Civility Project, finds it encouraging that roughly 56 percent of Americans seem to believe it “better for people to have good manners” than to “express what they really think” (76) and claims that civility suggests meals are “not the best venue for political debate” (79). On my view, by contrast, people too frequently censor themselves rather than engage in conversation with someone they think wrong about an issue. I think this horribly unfortunate, even if understandable. I think it is understandable because of the way many of us are raised. I think it unfortunate because it leads predictably to a loss of discourse that would promote a more civil society. When people don’t engage in civil discourse with each other, it’s too easy for people to live in ideological bubbles, too likely that people will be unable to even engage with those they disagree with, and too easy for those with power to ignore the wishes of the rest. I want to suggest one cause and possible corrective of this situation.

As children, when we visit extended family or friends, many of us are told not to mention religion or politics, Uncle Bill's drinking, Aunt Suzie's time in prison, or any number of other family "secrets" or disagreements. Those subject to these parental restrictions learn not to discuss anything controversial, including serious social issues and their own values. The lesson many seem to take from this is that it is impolite and disrespectful to disagree with others. It is hard for me to think this has not contributed to the polarization and rancor in our society. Because we are trained, from an early age, to censor ourselves and repress conversation about a wide array of topics, it's not surprising that many are shocked when someone disagrees with them—we are taught not to disagree or even suggest a topic of conversation about which there is likely to be disagreement, so people are naturally surprised when others do precisely that. They think it rude. Given the surprise, moreover, many make no attempt to provide a reasoned response to someone who says something they disagree with or find distasteful. This is a mistake.

The problem may be worse than simple parental limits. As a culture, we seem committed to social separation. Not only do we actively and explicitly discourage children from having honest conversations (which join us with others), but we also seek to set up our lives so that we have more distance from each other—even our immediate family members. People complain about the rising cost of homes, but in real dollars, the cost per square foot of a home has not increased that much (see this). Home costs have increased largely because we insist on larger homes—homes where we have our own bathrooms, our own bedrooms, our own offices. With all of that space, we are away from our loved ones, leaving us able to avoid difficult conversations with even our closest intimates. We don't have to negotiate for time in the shower, for use of the television, or much of anything else. We don't have to discuss things we disagree about. (And, of course, Americans tend to think that once a child graduates from high school they ought to move out—again, allowing those almost-adult children to avoid dealing with their parents and so never learn how to deal with them when they disagree. And when they "talk," they now do so by texting—furthering the distance from what face-to-face, or at least phone, conversations would allow.) In all, we insist on and get more—more space, more privacy, more isolation. We also sort ourselves—moving to neighborhoods and jobs where others who agree with us live and work. We spend less and less time with people we disagree with. And then we are surprised that we don't know how to deal with such people.

So much for the social criticism. That is, I submit, one of the causes of our current lack of civil discourse (and thus increased polarization). If that is right, the solution should be straightforward: stop taking steps that discourage children from engaging in honest discussion. Make children share a bathroom so that they at least have to negotiate its use with a sibling. Maybe have them share a bedroom too! Really importantly, stop telling children not to discuss certain topics with others. Let them learn from others, let others learn from them. (And obviously, those of us teaching in college should seek to promote discussion of ideologically diverse views, even views that some find offensive.) We need to be offended when young so that we don’t refuse to engage with others we find offensive when we are adults. We would then be prepared for honest civil discourse.

Why ‘ProSocial Libertarians’?

I am wary of isms and labels. They are used too often by too many as excuses to stop thinking. Worse, no doubt aware of the human tendency to avoid ideas that challenge our preconceptions, unscrupulous advocates on all sides use labels such as ‘socialism’ or ‘far right’ to pillory views with which they disagree, in effect saying ‘These ideas are beyond the pale. You can ignore them.’ This, in turn, further discourages people from venturing outside the safety of their thought bubbles and trying to understand why others might hold different views. 

Although I am quite sensitive to this thought-stultifying use of labels — having taught critical thinking for years — I am sure I am not the only person whose curiosity is piqued, rather than squelched, when something is effectively labeled as beyond the pale. (This, by the way, is the main reason — along with my name — that I first read Ayn Rand.) So, fortunately, there are also people who want to be challenged and seek out ideas that put their preconceptions under strain. If you fall into this group, you should enjoy this blog.

Despite the risks that labels bring, we cannot manage without them. To minimize the risks, we should acknowledge that labels are only a starting point for discussion and that the meaning of any politically interesting terms will need to be clarified on an ongoing basis. 

In light of all this, if I had to choose a label that best captures my political orientation, that label would be 'libertarian'. I found it dismaying, then, during the COVID-19 pandemic, to see the term 'libertarian' — as well as related terms like 'freedom' — arrogated by a rogues' gallery of activists and politicians who have been called — with some justification — antisocial.

What was dismaying was that these so-called ‘libertarians’ were acting out an old, muddleheaded conception of libertarianism that many people could (wrongly) take as reason to dismiss libertarian ideas as unworthy of serious consideration. For, according to this old, muddleheaded conception, libertarians just _are_ antisocial. Like Randy Weaver, libertarians on this conception want nothing more than to be left alone and they will happily head to the woods with their guns and family to achieve this end. Properly understood, however, libertarians need not be Randy Weavers. Or, at least, so I believe. (Please note: In no way do I intend for my use of Randy Weaver as an example of an antisocial libertarian to diminish the tragedy and injustice that befell him and his family at the hands of the United States government.)

Given what the honest use of labels requires, I want to be as clear as I can about what I mean by ‘libertarian’ and why being a libertarian involves being prosocial, not antisocial. But there is no such thing as a conceptual dictator, so any work towards understanding libertarianism will, of necessity, be a joint enterprise. Hence, the idea of this blog: a civil forum for exploring what it means to be a libertarian and the ways being a libertarian involves being prosocial. Hence also, our name: ProSocial Libertarians.

Discourse and Attendance in College Classes

Many of my posts on RCL have been about discourse. None has been directly about discourse in classrooms, but I do try to make my classes sites of civil discourse. This is both because student dialogue is what makes the classroom fun and exciting for me and because I believe it is an essential part of college. (See this.) The discourse that occurs in classrooms and elsewhere on college campuses is an invaluable part of the college experience.

As I've discussed previously, I think there are two basic reasons to engage in discourse: to maintain or nourish a relationship or to convey information. (See here.) In college classrooms, I will simply assume, the latter reason is paramount. That is also hugely important elsewhere on college campuses—students learn a lot from each other—but so is the first reason, as students make connections with others, some of whom will be lifelong friends and some of whom will be business associates.

This post is primarily about classrooms, so it's the conveying of information that is relevant here. In particular, it's what is relevant when asking whether attendance should be required in college classes. My own view about this has changed over the years. In the past, I've marked people down for poor attendance or multiple tardies or made class participation—for which attendance is a necessary prerequisite—a separate and substantial part of students' grades. At a certain point, though, colleagues convinced me that making participation a part of a student's grade was unfair to those students who have significant psychological issues with speaking in class. At first, I responded to that by allowing the "participation" to be outside of class—either in office visits or email. Eventually, I dropped it as a requirement and instead made it only a way to improve one's grade. I've never stopped believing, though, in the importance of attending and participating in class.

Over the years, I've had students approach me about taking a class without attending. Some had very good reasons they could not attend during the day when the course was offered—needing to work full time to support their family, for example. My standard reply was always something like "no, attendance is required" or "you can't pass this class without attending, so no." More recently, I have been questioning the wisdom of that. The issue has to involve consideration of the sort of information that is conveyed in classes.

As a philosopher, I am not at all concerned that students learn biographical facts about philosophers and only somewhat concerned that students learn even basic facts about different theories. My main concern is in getting students to see how to do philosophy. What that means is that I want students to learn how to think clearly, check assumptions, make valid inferences, and engage in both verbal and written discourse about arguments and their premises, inferential moves, and conclusions. I want to convey to them how to do this well.

Given what I want the students to get out of my classes, my question becomes “is attendance necessary for students to think clearly, check assumptions, make valid inferences, and engage in both verbal and written discourse about arguments and their premises, inferential moves, and conclusions?” Another way to ask the question is to ask: “do individual learners need professors to learn how to do those things?” I think most do.

Classically, education has three stages: grammar, logic, rhetoric. I prefer to think of these in terms of mimesis, analysis, synthesis. The idea is that young children must memorize information, imitating language and such, and until they have some minimum amount of knowledge, they can’t be expected to do anything else. Once they have that, though, they can move on to the second stage wherein they can use logic to analyze things, figuring out what goes where and why. They can even question—analyze—the bits of information they previously learned. Only with mastery of analysis can they move on to the third stage wherein they can make something new, synthesizing something from the parts of what they have analyzed.

Teachers are clearly needed for mimesis—someone has to provide the student what it is that should be learned (memorized, imitated). Perhaps teachers are also needed for the beginnings of the second stage, pointing students in the right direction as they begin to do logical analysis. One needs to understand basic rules of deductive logic to do analysis well and I suspect most of us need someone to teach us that. But does everyone? Frankly, I doubt it, though I suppose how much teachers are needed here will depend on how much logic is innate to our reasoning abilities. It seems even less likely that teachers are necessary for the third stage, though clearly someone to give us direction can be useful and I think it likely that most of us learn best in dialogue with others. If that is right, attendance in class would clearly be useful. So perhaps that is the answer to my question: most people need direction, they can only get that in class, so attendance should be required.

What, though, if some want to learn without professors? Some certainly can do so. Whether they should be allowed to do so when in college is another question. After all, if they are able to do so, why should they enroll in college at all? If they do enroll, the college can simply say “you are enrolling here and that means accepting that we know best how you will learn (or at least recognizing that we get to decide), and we deem it necessary for you to attend courses.”

Some will no doubt think that the sort of view just attributed to a college is overly paternalistic. On the other hand, some people will be unfortunately wrong when they think they can teach themselves collegiate level material. Some people, after all, read great books and completely misunderstand them. I have met people who thought themselves erudite for reading Hegel, Nietzsche, Marx and others, but whose comprehension of those authors was abysmal. Such people would be well served by a policy requiring course attendance. Without it, they would lack comprehension and thus do poorly on any assessments.

Still, presumably some can read those materials and do well. (In other systems, after all, attending classes is—was?—not expected; one studies a set of materials and then is examined.) Others might not do well, but do well enough for their purposes. They may, that is, only want some knowledge (or some level of skill)—happy to have read the texts even if their comprehension is limited—and be happy to get a C or C- in a course. They may have reason to want a college degree independent of learning well. (In our society, after all, many seem only to go to college to get a degree to signal to employers that they are worth hiring. It’s hard to blame them for this given how our society works.)

So a student may have good reason to enroll in a college, register for a course, and not attend. But what should we think about this and what should professors do? Some professors, of course, may be annoyed or insulted if students are apparently unconcerned about attending regularly or showing up on time. I was in the past, but no longer am. I still, though, have a hard time tolerating feigned surprise at grades from students who obviously did not prioritize the class. I would prefer a student who says "it's not worth my coming to class; I'll just try to pass without doing so" to one who lies about how hard they are trying to do the work. Frankly, I am coming to think that if they pass, the former simply deserve congratulations. (If they don't pass, they can't expect me to be upset. I can root for their passing, without being surprised if they don't.) But, honestly, I'd be hugely surprised if they did at all well without attending. That is the main concern—the best pedagogy.

Why would I be surprised if a non-attending student passed? Frankly, I think that the vast majority of people learn better in a class with a professor than they can without. If nothing else, in philosophy and other humanities classes, they learn something very important—how to engage in good civil, honest, and productive discourse. That does affect how they perform on exams and papers. What I expect in all of the writing my students do—whether on a short essay exam, longer essay exams, or papers—is well-written and well thought out, honest and civil responses to whatever prompt is provided. I want them to do philosophy after all, not sophistry or fluff. Attending class means being in an environment designed to help them learn. If they participate as I hope they do, they can also help improve that environment. That makes for better outcomes for all in the class. Even if they don’t participate—and, again, I realize doing so is honestly hard for some students—they are likely to do better simply because they hear the sort of discourse I seek to promote. If they hear others practicing good discourse, they are likely to pick up on what it is. Attendance helps.

The whole point of classes is that, for most students who attend, they promote learning. Why, then, would someone want to register for a class if they don't plan to attend? One answer is that the current system mainly doesn't allow them to get the credentials of college without doing so. Mainly. We do have fully asynchronous online classes for which one does the work on one's own time so long as one completes it by the required deadlines, including finishing it all by the end of a semester. (But why insist on a time limit?)

While we don’t have a system conducive to students not registering for classes and yet getting credentialed, that isn’t reason to require attendance in the classes we offer. Perhaps we ought to make it possible for students to take a syllabus, learn the material on their own, and sit for an exam when they feel themselves ready, without imposing a schedule on them. If they pass, great. If not, perhaps they try actually taking the class (i.e., including attending). That may be what we should do. Until then, some of us will require attendance and some will not.

Open for comments and discussion. What do others think?

The World is Not a Therapy Session

Braver Angels does fantastic work helping people improve conversations with those with whom they have significant and stress-inducing disagreements, so that they can gain greater mutual understanding of each other, thereby reducing polarization. It seems to work. As I noted earlier, though, the desire to maintain or improve one's relationships with others is only one of the two main reasons we engage in discourse. The other is to exchange information, both "teaching" and "learning." As I noted in that previous post, I worry about the "truth deficit" likely to emerge if we stress mutual understanding (of each other rather than of each other's views). Here, I'll discuss this a bit further.

What is encouraged in Braver Angels’ workshops is active listening, where one attends to what the other says, providing non-verbal clues of interest, along with reflecting back to the other what they said. In a therapeutic setting, reflecting back to another what they said can be incredibly useful. “People like having their thoughts and feelings reflected back to them” (Tania Israel, page 51) and so increases their comfort level when in therapy, thereby allowing them to open up. For therapeutic purposes, it seems really quite useful. Nonetheless, I have long been uncomfortable with it in other settings.

I had a date once with a woman who, throughout dinner, reflected back to me what I had said. It so threw me off that I didn't really know what to make of it. I don't recall how long it took for me to realize that she might have resorted to the tactic because she found what I was saying antithetical to her own views (I don't recall what we were discussing). I'll never know for sure as I found it so distasteful that I never saw her again. If the same thing happened today, I'd probably ask why she was doing it, but I suspect there are others who would do as I did and walk away. (I don't deny, of course, that others appreciate it.)

Again, the technique has value—there is good evidence that it helps people feel comfortable, which can be useful both in developing relationships and in therapy situations (see Israel footnotes 5 and 6 on page 74). Importantly, though, the world is not a therapy session and sometimes what matters is exchanging information, not (or not merely) developing a relationship. Put another way, while it's true that we sometimes want to develop a relationship and learn about the person, other times we want to figure out the truth about a topic and are less willing to accept the truth deficit. If we are trying to persuade someone to change their views about abortion, capitalism, gun control, immigration, schools, welfare rights, or any number of other contentious topics, we might want to know more about our interlocutor, but we also just want to persuade—or be persuaded. (Part of why we want to know who they are is to determine how we might persuade them!)

To be clear, when we are engaging in a serious discussion with someone about an issue we disagree about, we should be open to the possibility that the view we start the conversation with is mistaken and that we can thus learn from our interlocutor. Of course, when we start, we will believe we are right and hope to teach (persuade) the other, but we have to know we can be wrong. We should also be open to the possibility that neither of us is right and there is some third position (perhaps somewhere between our view and theirs, perhaps not) that is better still. What is important in these cases, though, is figuring out the truth about the issue (or getting closer to it). We shouldn’t give that up lightly.

Getting to the truth may, in some instances, be aided by reflecting to each other what we've said. Obviously, if we do not understand what our interlocutor has said we should ask them to explain. Sometimes we simply need some clarification. We need to know, after all, that we are actually talking about the same thing and we need to understand where our views overlap and where they do not. Sometimes, also, we might ask someone to repeat what they say in different words to make sure we understand; we might also do it for them (common in teaching). But if reflecting back to each other is used for other reasons (making the other feel comfortable, for example), I wonder how far it goes. It seems to me that we need to challenge each other. Sometimes, we may even need to be abrasive—or to have others be abrasive toward us. This can help us improve our own views. (For more on that see Emily Chamlee-Wright's article on the topic here, as well as my response. See also Hrishikesh Joshi's Why It's OK to Speak Your Mind.)

In short, it seems to me that in normal discourse with someone with whom we disagree, we ought to be at least as concerned with determining the best view as we are with making each other comfortable. Making each other comfortable is important, but perhaps primarily as a precursor to honest conversation. If I say, for example, that "I believe we should have completely open economic borders, perhaps just keeping out known criminals" and you reply "let me be sure I understand; you think we should not stop anyone from coming into the country (perhaps unless they are criminals in their own country) even if it means they take our jobs, push for an end to Judeo-Christianity, and bring in drugs," I am likely to skip over the first part—which strikes me as unnecessary and vaguely insulting—and move on to the latter claims, which I think are all mistakes. I might think "OK, so they wanted to be clear" or "OK, they wanted time to gather their thoughts," but if it becomes a regular part of the conversation, I am less likely to continue engaging (and, frankly, less likely to trust the other). I may even wonder why people approach all of life as if it's a therapy session.

Three News Items to Rally Around

Since I spend a good bit of my time thinking about polarization and ways to combat it, I thought I would bring attention to three recent news items that should help reduce polarization but seem to mostly go unnoticed.

First, there is this from WaPo 10/24/2021, about a police chief in a town in Georgia, seeking to have police officers shoot to incapacitate rather than to kill (so, shooting in the legs or abdomen, for example, instead of the chest).  Of course, it would be best if no one had to be shot at all, but those that (rightly) complain about police violence should be embracing this as an improvement as it would presumably mean fewer killings by police.  And those who worry endlessly about “law and order” would seem to have to choose between that and saying “yeah, we don’t mind it if the police kill people.”  Since the latter would likely be seen as including some nefarious beliefs, it’s hard to imagine why they, too, wouldn’t embrace it.

Second, from NYT 11/3/2021, is a short piece about a Swiss company literally taking CO2 out of the air and making soda with it. Why everyone isn't talking about this ecstatically is beyond me. I know folks on the (pretty far) left who worry endlessly about global warming and claim we have to stop this and stop that to at least slow it down before we all die. I know folks on the (pretty far) right who claim, more or less, that global warming is fake news. Either way, this should be good news. If global warming is fake, then this sort of technological advancement may be uninteresting in the long run—but those on the right should be happy to say "OK, we know you're worried, why don't you invest in this to help?" If it's not fake news (fwiw, it's not), this may be the way to save us and the planet. Those on the left (assuming they don't want simply to be victims and keep fighting about "green new deal" sort of regulations) should be embracing the possibilities, declaring "yes, we need more of this as a good way forward without killing the economy and making everyone worse off."

Finally, from Axios 11/5/2021, is a story on the jobs report.  In a nutshell, "America has now recovered 80% of the jobs lost at the depth of the recession in 2020. … Wages are still rising: Average hourly earnings rose another 11 cents an hour in October, to $30.96. That's enough to keep up with inflation."  I know that some question the specific numbers.  That's no surprise.  What is surprising (even given how bad Dems usually are on messaging) is that Biden and the Dems haven't been touting this at every chance.  It should please Reps as well except that it may make some swing voters less likely to go to their side.

The above three stories are pretty clearly good news for everyone.   The third is perhaps better for Dems than Reps, but somehow they haven’t decided to hype it up or use it as a way to convince moderate legislators or voters to help them.  The first and second are good for everyone.  Yet it doesn’t seem like many are talking about any of the three.  It’s almost as if both sides of our political divide want to remain divided.  And to alienate those of us who refuse to take either side.  Or perhaps they want to clearly demonstrate that neither side should be taken seriously and it’s high time for a party to emerge in the middle. 

The “middle” here might be interesting.  What party consistently opposes state coercion and force against civilians?  What party consistently opposes the state looking the other way when negative externalities become worse and worse?  What party consistently favors policies that grow the economy so that all will do better?  There is such a party, even if it has its own problems.

Vaccines, Science, Judgement, & Discourse

My very first entry into this blog—back on July 2, 2020—was about wearing face coverings because of Covid. That was fairly early into the pandemic, but I think the post has aged very well and I still stand by it.  It seems clear that when there are many cases of a serious new infection, people should wear masks if they go into an enclosed space with lots of unknown others. I also think, though, that it would be wrong to have government mandates requiring that people wear masks (except in places, like nursing homes, where the occupants would be at a known and significant risk) and that private businesses should decide the policy for their brick and mortar operations, just as individuals should decide the policy for their homes.  There is nothing inconsistent in any of that.

Similarly, it seems to me that everybody who can, should want to be inoculated against serious infections (having had the actual infection is likely sufficient). Again, that doesn’t mean that it should be government mandated. (I’m so pro-choice, I think people should be able to choose things that are bad and foolish; I don’t think they should be able to choose things that clearly cause harms to others, but neither the vaccine nor its rejection by an individual does that, so far as I can tell.) We shouldn’t need government mandates to encourage us to follow the science.  So let’s discuss that.  

Acetylsalicylic acid alleviates headaches, fevers, and other pains.  I don't know how that works.  Here's a guess: the acid kills the nerves that are firing.  I actually doubt there is any accuracy in that guess at all, but it doesn't matter.  I don't need to know how aspirin works.  I know it works and is generally safe so I use it. How do I know this?  It's been well tested, both by scientists and by tremendous numbers of people throughout the world.

Now, I actually think I have a better sense of how vaccines work than how aspirin works, though I doubt that holds for the new mRNA vaccines and I realize I could be wrong.  Again it doesn’t really matter.  I’ll use them nonetheless—and for the same reason. The fact is that most of the time, most or all of us simply trust in science.  We use elevators, escalators, cars, planes, trains, clothing with new-fangled fabrics, shoes with new-fangled rubber, foods with all sorts of odd new additives, etc.—all of which were developed with science.  And we don’t usually let that bother us.  

What seems to me foolish in standard vaccine refusal is roughly the same as what seems foolish to me in opposition to using the insecticide DEET in areas where mosquitoes carry malaria, which kills many people. It’s true that the DEET causes some significant problems, but it is unlikely that those problems are worse than the many deaths that would result without it.  This seems clear just based on historical use of the chemical. Similarly, vaccines may cause some problems but the (recent) historical use suggests pretty clearly that they save lives.

Of course, there are always mistakes.  Science is constantly evolving—it is more of a process, after all, than a single state of knowledge.  Scientists make mistakes.  Worse, sometimes scientists bend to their desires and sometimes industries have enough financial power to change the way science is presented. (Looking at you, sugar industry!) Given that and a personal distrust of government, I certainly understand when people want to wait for evidence to settle.

A drug or other scientific advancement used too early may well turn out to be more problematic than it's worth.  But aspirin has been well tested.  And vaccines have been well tested.  Even the recent Covid vaccines have been well tested.  The fact is you are far more likely to die from Covid if you are unvaccinated than if you are vaccinated.  Granted, the odds of dying either way are thankfully slim for most of us.  But what people are now faced with is a free and easy way to avoid (a small chance of) death.  Admittedly, it's possible that in 20 years we'll learn that these new vaccines cause cancer or such.  But scientific advancement will continue and the fight against cancer is already far better than it was at any time in the past.  So the option is between, on the one hand, a free and easy way to avoid a chance of death or serious illness now, combined with some chance of an added problem later that we may know how to deal with, and, on the other hand, not avoiding that.  Maybe this is a judgement call, but the former seems pretty clearly the better option in standard cases.  (Other downsides, so far as I can tell, are mostly fictitious.  If you're worried about a computer chip embedded in the vaccine, for example, realize you could have had one put in you when you were born.)

About it being a judgement call. Consider using a GPS.  Some people just slavishly listen to the directions from their GPS. Unfortunately, this can have pretty bad results.  Other people refuse to use a GPS at all, perhaps thinking they have to do it on their own. For me, the GPS (in my phone) is a tool that is helpful to get where I need to go when I can’t really remember all the directions well or simply don’t trust my ability to do so. Still, I listen to the GPS and sometimes override its directions, for example, if I think it’s going in an unsafe way or a way that’s likely to cause more problems.  Here too, judgment is needed.

Unfortunately, we all seem to think we individually have great judgment even though it's obvious that not all of us do.  Or perhaps better, none of us do all of the time.  Sometimes we have to recognize that we should trust others who know better than we do.

So, what should we do?  We should each try to be honest with ourselves about whether our judgment is likely to be better than that of those telling us to do other than we would choose. We should listen to people who are actually able to consider all of the relevant evidence.  Because it's unlikely that any single source of information will always be completely trustworthy, we should likely listen to a variety of generally trustworthy sources.

We need to find people we can rely on—mentors or people recognized as experts in the relevant field—and take their views seriously.  This may simply push the problem back a step: those whose judgment leads them to make bad choices may simply choose to listen to other people with similarly bad judgment.  That is a real problem worth further investigation.  My only suggestion here is to trust those who are leading good lives and who have the trust of their professional peers.  I don't pretend that is sufficient, but can't say more here except to note that we can only hope to get better decisions, for ourselves and others, if we have better discussions.  To that end, see this post. Also, realize that if people did in fact standardly make better decisions (in part by having better discussions prior to making decisions), there would be less call for government intervention.  Indeed, if we had better conversations across the board, we would have fewer people wanting government intervention.  Realizing that those who have suffered through COVID are inoculated, for example, should stop others from trying to pressure them to get vaccinated.


Thanks to Lauren Hall, Connor Kianpour, and JP Messina for suggesting ways to improve this post.