Category Archives: Discourse

Locke and Land Acknowledgements 

The following is a guest post by Kyle Swan, Professor of Philosophy and Director of the Center for Practical and Professional Ethics at CSU Sacramento.


Stuart Reges is suing his employer, the University of Washington, for violating his First Amendment speech rights. The University initiated an investigation into whether Reges violated its anti-harassment policy by publishing a land acknowledgement statement on his course syllabus. His read,

“I acknowledge that by the labor theory of property the Coast Salish people can claim historical ownership of almost none of the land currently occupied by the University of Washington.” 

Reges is protesting the recommended acknowledgment circulated by the University. The protest is clearly protected speech. I hope Reges wins his suit decisively. 

But what about Reges’s statement? He appears to be serious. In a Quillette article he writes, 

“I am a Georgist, and according to the Georgist worldview, Native Americans have no special claim to any land, just like the rest of us. But since few are familiar with that economic ideology, I leaned instead on a principle described in John Locke’s Second Treatise on Government, now known as the labor theory of property or the ‘homestead principle.’ To the Georgist idea that land is owned in common by all living people, Locke added that by mixing one’s labor with the land, one encloses it from the shared property because people own the products of their labor. If, for example, you make the effort to grow corn on an acre of land, you come to own that acre of land, so long as there is still plenty of land left for others to use.” 

The labor theory Reges refers to is a theory of property acquisition. In its original state, the entire earth is given to us in common. Nobody owns stuff in the world. The question is, how can we remove things from the commons and make rightful claims to them that would allow us then to exclude others from using them? 

Locke provides some conditions. First, it has to be true that someone hasn’t already done that — the stuff must not already be owned. Second, the person appropriating something from the commons has to do it in a way that improves it through their productive activity — gathering berries, hunting deer, growing vegetables, clearing trees — all kinds of activity count. Finally, the way they do this has to leave enough and as good for others, so that no one would have reason to complain about the appropriation.

Professor Reges’s acknowledgment is saying that the Coast Salish people weren’t ever in a position to claim ownership. They were never rightful owners. So when settlers came to the area in the late 1840s or whenever, he supposes these settlers were appropriating the land from the commons, rather than from a group of people.

Professor Reges’s application of Locke’s theory is dubious. I’m a philosopher, not a historian, but it seems unlikely to me that there were no groups of native people engaged in productive activity in the relevant areas when settlers showed up. 

More importantly, though, if Reges is correct and there weren’t people there already with legitimate ownership claims, then the behavior of government authorities in the mid-19th century was very odd, because what they were doing was negotiating treaties with the native peoples, including the Salish. Doing so suggests their recognition of legitimate claims made by these groups. Why were they making contracts to acquire land from these native peoples if they didn’t own the land? It seems incredible they would do this if they regarded the lands as unused, unoccupied, and unowned. So it looks like this was a transfer of land ownership rights, not an original appropriation of them.

Now everything hangs on how these contracts were presented and executed. Were the negotiations above board? Were all the relevant peoples represented? Did they all sign? Were all the terms of the contract fulfilled? Again, I’m a philosopher, not a historian, but if not, if there were problems with the agreement, then there wasn’t a legitimate transfer of the Washington territories.

If that’s right, then a different part of Locke’s theory applies, which you can find in a later chapter of the 2nd Treatise, Of Conquest. There Locke argues that an aggressor who “unjustly invades another man’s right can…never come to have a right over the conquered…. Should a robber break into my house, and with a dagger at my throat make me seal deeds to convey my estate to him, would this give him any title? Just such a title, by his sword, has an unjust conqueror, who forces me into submission. The injury and the crime is equal, whether committed by the wearer of a crown, or some petty villain. The title of the offender, and the number of his followers, make no difference in the offence, unless it be to aggravate it.” 

And so “the inhabitants of any country who are descended and derive a title to their estates from those who are subdued and had a government forced upon them against their free consents, retain a right to the possession of their ancestors….the first conqueror never having had a title to the land of that country, the people who are the descendants of, or claim under those who were forced to submit to the yoke of a government by constraint, have always a right to shake it off, and free themselves….If it be objected, This would cause endless trouble; I answer, no more than justice does.” 

Locke’s theory of acquisition has two parts. The first is a theory about how original appropriation would be legitimate. The answer has to do with labor and productive activity. But that part doesn’t seem to apply to this case, since it looks like the Salish already had an existing claim. The second part of the theory is about how acquisition by transfer would be legitimate. The answer here has to do with agreement, and everything depends on the quality of the agreement and how it was or wasn’t honored. But we see there’s more to the story. When there has been no agreement, no just transfer and only conquest, Locke says that people retain “the native right of their ancestors.” 

Locke has long been accused of providing intellectual and justificatory cover for the (mis)appropriation of Indigenous peoples’ land in America and around the world. But it seems like it’s been Locke’s views that have been misappropriated.

Moralism and Contemporary Politics

People have asked me why I seem so focused on moralism. There are multiple reasons, including having too much personal experience with people who operate as moralists, but what it really comes down to is this: if we take moralism broadly to be the view that we should use the machinery of law to impose a moral view on the jurisdiction, most people in politics today are moralists. (So, not just a justification of a specific law, but of the whole system of law: a loss of viewpoint neutrality.)

On the right, we have what are called “common good constitutionalists” or “common good conservatives,” who basically say we should interpret the Constitution of the United States of America in a way that will get us the common good of society. Of course, what they mean by “the common good” follows from their conservative beliefs (see Patrick Deneen and Adrian Vermeule).

On the left, you see basically the same thing without the claim made explicit. You have people pushing a particular view about how to guarantee equality and freedom in society, meaning a particular view about how society should be set up—and of course, that is a way meant to attain their view of the common good.

Of course those on the left and those on the right disagree about what the common good is.  This is what “culture clashes” are. So, for an obvious example, the two camps here would take opposing sides with regard to today’s SCOTUS decision in Dobbs v Jackson Women’s Health Organization.  One side (or at least some on that side) thinks all human life is deserving of the same basic respect as all other human life; the other thinks women deserve the respect that would enable them to control their own lives.

Both sides seem to believe that the machinery of the state—the law—should be used to make society moral, given their own (competing) views about what that entails.   (And we are likely to see this play out from SCOTUS fairly quickly.)

Importantly, libertarians are different.  We believe that people should be free to live their lives as they see fit subject only to the restriction that they don’t wrongfully harm others.  Some might say that this is a form of moralism as well—one wherein the view of morality is simply thinner than those of the other two views.  Perhaps that is right, but consider how it plays out.  Those on the left would want to force people to recognize and work for equal rights for women and to pay for programs meant to help with that.  Those on the right want to force women to carry pregnancies to term.  Meanwhile, libertarians want to force people not to force people to do anything.  That last seems obviously better.

Interpretive Charity and Heated Debate

I wanted to add to the discussion my co-bloggers have started on discourse norms.

Consider the following sample dialogues:

1)
A: “I think stricter gun regulations would fail to prevent either determined criminals or the seriously deranged from committing the sorts of horrible crimes that make people want those stricter laws, but they would violate the rights of law-abiding gun owners and possibly make them less safe.”

B: “I think you’re mistaken about that.  Just as criminal background checks make good sense, so would some sort of red-flag or mental health history screening.  Indeed, since we already use criminal background checks, we could easily combine the two, plus it would help if there weren’t easy ways to circumvent the background checks.”

2)
A: “I think stricter gun regulations would fail to prevent either determined criminals or the seriously deranged from committing the sorts of horrible crimes that make people want those stricter laws, but they would violate the rights of law-abiding gun owners and possibly make them less safe.”

B*: “That’s outrageous.  You think guns are more important than kids’ lives?”

3)
A: “I think there’s no plausible rationale for tighter abortion restrictions.  Claiming that life begins at conception is a religious doctrine, so using it as the basis for law would violate church-state separation.  In many religions, personhood isn’t thought to obtain until at least the 2nd trimester.   In any case, there are all sorts of reasons a woman might seek to terminate a pregnancy, medical ones most obviously, but also psychological reasons, and I think the best public policy would be to leave it up to her.”

B: “I disagree.  I am not depending on any particular religious doctrine when I claim that human life begins at conception. It’s a developmental spectrum, there are no sharp dividing lines, so if we don’t respect the new life that the pregnancy represents as early as possible, it’s a slippery slope.  As to the reasons why women might want to terminate, sure, if there’s a legitimate medical rationale that the mother’s life is in jeopardy, I can see that, but I think a lot of what you’re calling psychological reasons could be addressed through counseling, spiritual or secular.”

4)
A: “I think there’s no plausible rationale for tighter abortion restrictions.  Claiming that life begins at conception is a religious doctrine, so using it as the basis for law would violate church-state separation.  In many religions, personhood isn’t thought to obtain until at least the 2nd trimester.   In any case, there are all sorts of reasons a woman might seek to terminate a pregnancy, medical ones most obviously, but also psychological reasons, and I think the best public policy would be to leave it up to her.”

B*: “That’s outrageous. You think it’s ok to murder babies to preserve some illusion of women’s autonomy?”

————————

You may have noticed that dialogues (1) and (3) read very differently from (2) and (4). That’s because in (1) and (3), the B character is responding to the A character’s arguments with different arguments. In (2) and (4), the B* character does not actually engage with A’s arguments at all, but goes right for “baby-killer.” Regardless of your view on either abortion or gun control, you should be able to see that in (2) and (4), B* is not arguing in a rational way, whereas B is arguing rationally in (1) and (3). Does A actually hold the position that B* alleges? Almost certainly not. This is commonly known as the “straw man” fallacy, in this case augmented by an emotional appeal. We know this because in the other pair of dialogues, B is offering actual counter-arguments.

Why does this matter?   Because when people argue about these things, they have two sorts of objectives.  One is changing the mind of the other person, or perhaps onlookers.  The other is changing public policy.   But neither of these goals will be served with arguments that do not engage their opponents.  It’s totally implausible that A will respond to B* with “oh goodness, I didn’t realize I was advocating baby-killing, I hereby change my position.”  What happens instead is the discussion goes nowhere.

Sometimes we just get angry at people who disagree with us, and we are bewildered that others don’t see things our way.  But we should resist the temptation to straw-man.  If you don’t have the emotional bandwidth to argue with people, you’re certainly not required to do so.  But if you do think it’s worth arguing about, then your objectives will be better served with interpretive charity.  What actually is the other person’s position, and why?  Why do they think your position is wrong?  Is there something that might be common ground?  Are you talking past each other?  Are you sure that your position is informed by facts and logic?  Do you have any talking points that might be misinformed?

Sometimes we never resolve disagreements on highly controversial issues.  But if you hope to get anywhere, there’s a right way and a wrong way to do it.

What happened?

It’s a bad week. Polarization has led to a federal truth commission (thank you Dems) and the likely removal of federal protection for reproductive freedom (thank you Reps). Neither of these, so far as we know, is popular. A working democracy of Americans would be unlikely to bring about either. But we don’t seem to have that—or at least not to the extent that we might have thought. In part, this is because discourse in our society has deteriorated; it is, to say the least, strained.

Given how strained our discourse has become, some would prefer to have less of it, walking away from those they disagree with and encouraging others to do the same. In Choosing Civility, P.M. Forni, cofounder of the Johns Hopkins Civility Project, finds it encouraging that roughly 56 percent of Americans seem to believe it “better for people to have good manners” than to “express what they really think” (76) and claims that civility suggests meals are “not the best venue for political debate” (79). On my view, by contrast, people too frequently censor themselves rather than engage in conversation with someone they think wrong about an issue. I think this horribly unfortunate, even if understandable. I think it is understandable because of the way many of us are raised. I think it unfortunate because it leads predictably to a loss of discourse that would promote a more civil society. When people don’t engage in civil discourse with each other, it’s too easy for people to live in ideological bubbles, too likely that people will be unable to even engage with those they disagree with, and too easy for those with power to ignore the wishes of the rest. I want to suggest one cause and possible corrective of this situation.

As children, when we visit extended family or friends, many of us are told not to mention religion or politics, Uncle Bill’s drinking, Aunt Suzie’s time in prison, or any number of other family “secrets” or disagreements. Those subject to these parental restrictions learn not to discuss anything controversial, including serious social issues and their own values. The lesson many seem to take from this is that it is impolite and disrespectful to disagree with others. It is hard for me to think this has not contributed to the polarization and rancor in our society. Because we are trained, from an early age, to censor ourselves and repress conversation about a wide array of topics, it’s not surprising that many are shocked when someone disagrees with them—we are taught not to disagree or even suggest a topic of conversation about which there is likely to be disagreement, so people are naturally surprised when others do precisely that. They think it rude. Given the surprise, moreover, many make no attempt to provide a reasoned response to someone who says something they disagree with or find distasteful. This is a mistake.

The problem may be worse than simple parental limits. As a culture, we seem committed to social separation. Not only do we actively and explicitly discourage children from having honest conversations (which join us with others), but we also seek to set up our lives so that we have more distance from each other—even our immediate family members. People complain about the rising cost of homes, but in real dollars, the cost per square foot of a home has not increased that much (see this). Home costs have increased largely because we insist on larger homes—homes where we have our own bathrooms, our own bedrooms, our own offices. With all of that space, we are away from our loved ones, leaving us able to avoid difficult conversations with even our closest intimates. We don’t have to negotiate for time in the shower, for use of the television, or much of anything else. We don’t have to discuss things we disagree about. (And, of course, Americans tend to think that once a child graduates from high school they ought to move out—again, allowing those almost-adult children to avoid dealing with their parents and learning how to deal with them when they disagree. And when they “talk,” they now do so by texting—furthering the distance, compared with face-to-face or at least phone conversations.) In all, we insist on and get more—more space, more privacy, more isolation. We also sort ourselves—moving to neighborhoods and jobs where others who agree with us live and work. We spend less and less time with people we disagree with. And then we are surprised that we don’t know how to deal with such people.

So much for the social criticism. That is, I submit, one of the causes of our current lack of civil discourse (and thus increased polarization). If that is right, the solution should be straightforward: stop taking steps that discourage children from engaging in honest discussion. Make children share a bathroom so that they at least have to negotiate its use with a sibling. Maybe have them share a bedroom too! Really importantly, stop telling children not to discuss certain topics with others. Let them learn from others, let others learn from them. (And obviously, those of us teaching in college should seek to promote discussion of ideologically diverse views, even views that some find offensive.) We need to be offended when young so that we don’t refuse to engage with others we find offensive when we are adults. We would then be prepared for honest civil discourse.

Discourse and Attendance in College Classes

Many of my posts on RCL have been about discourse. None has been directly about discourse in classrooms, but I do try to make my classes sites of civil discourse. This is both because student dialogue is what makes the classroom fun and exciting for me and because I believe it is an essential part of college. (See this.) The discourse that occurs in classrooms and elsewhere on college campuses is an invaluable part of the college experience.

As I’ve discussed previously, I think there are two basic reasons to engage in discourse: to maintain or nourish a relationship or to convey information. (See here.) In college classrooms, I will simply assume, the latter reason is paramount. Conveying information is also hugely important elsewhere on college campuses—students learn a lot from each other—but the first reason matters there as well, as students make connections with others, some of whom will be lifelong friends and some of whom will be business associates.

This post is primarily about classrooms, so it’s the conveying of information that is relevant here. In particular, it’s what is relevant when asking whether attendance should be required in college classes. My own view about this has changed over the years. In the past, I’ve marked people down for poor attendance or multiple tardies or made class participation—for which attendance is a necessary prerequisite—a separate and substantial part of students’ grades. At a certain point, though, colleagues convinced me that making participation a part of a student’s grade was unfair to those students who have significant psychological issues with speaking in class. At first, I responded to that by allowing the “participation” to be outside of class—either in office visits or email. Eventually, I dropped it as a requirement and instead made it only a way to improve one’s grade. I’ve never stopped believing, though, in the importance of attending and participating in class.

Over the years, I’ve had students approach me about taking a class without attending. Some had very good reasons they could not attend during the day when the course was offered—needing to work full time to support their family, for example. My standard reply was always something like “no, attendance is required” or “you can’t pass this class without attending, so no.” More recently, I have been questioning the wisdom of that. The issue has to involve consideration of the sort of information that is conveyed in classes.

As a philosopher, I am not at all concerned that students learn biographical facts about philosophers and only somewhat concerned that students learn even basic facts about different theories. My main concern is in getting students to see how to do philosophy. What that means is that I want students to learn how to think clearly, check assumptions, make valid inferences, and engage in both verbal and written discourse about arguments and their premises, inferential moves, and conclusions. I want to convey to them how to do this well.

Given what I want the students to get out of my classes, my question becomes “is attendance necessary for students to think clearly, check assumptions, make valid inferences, and engage in both verbal and written discourse about arguments and their premises, inferential moves, and conclusions?” Another way to ask the question: “do individual learners need professors to learn how to do those things?” I think most do.

Classically, education has three stages: grammar, logic, rhetoric. I prefer to think of these in terms of mimesis, analysis, synthesis. The idea is that young children must memorize information, imitating language and such, and until they have some minimum amount of knowledge, they can’t be expected to do anything else. Once they have that, though, they can move on to the second stage wherein they can use logic to analyze things, figuring out what goes where and why. They can even question—analyze—the bits of information they previously learned. Only with mastery of analysis can they move on to the third stage wherein they can make something new, synthesizing something from the parts of what they have analyzed.

Teachers are clearly needed for mimesis—someone has to provide the student with what should be learned (memorized, imitated). Perhaps teachers are also needed for the beginnings of the second stage, pointing students in the right direction as they begin to do logical analysis. One needs to understand basic rules of deductive logic to do analysis well, and I suspect most of us need someone to teach us that. But does everyone? Frankly, I doubt it, though I suppose how much teachers are needed here will depend on how much of logic is innate to our reasoning abilities. It seems even less likely that teachers are necessary for the third stage, though clearly someone to give us direction can be useful, and I think it likely that most of us learn best in dialogue with others. If that is right, attendance in class would clearly be useful. So perhaps that is the answer to my question: most people need direction, they can only get that in class, so attendance should be required.

What, though, if some want to learn without professors? Some certainly can do so. Whether they should be allowed to do so when in college is another question. After all, if they are able to do so, why should they enroll in college at all? If they do enroll, the college can simply say “you are enrolling here and that means accepting that we know best how you will learn (or at least recognizing that we get to decide), and we deem it necessary for you to attend courses.”

Some will no doubt think that the sort of view just attributed to a college is overly paternalistic. On the other hand, some people will be unfortunately wrong when they think they can teach themselves collegiate-level material. Some people, after all, read great books and completely misunderstand them. I have met people who thought themselves erudite for reading Hegel, Nietzsche, Marx, and others, but whose comprehension of those authors was abysmal. Such people would be well served by a policy requiring course attendance. Without it, they would lack comprehension and thus do poorly on any assessments.

Still, presumably some can read those materials and do well. (In other systems, after all, attending classes is—was?—not expected; one studies a set of materials and then is examined.) Others might not do well, but do well enough for their purposes. They may, that is, only want some knowledge (or some level of skill)—happy to have read the texts even if their comprehension is limited—and be happy to get a C or C- in a course. They may have reason to want a college degree independent of learning well. (In our society, after all, many seem only to go to college to get a degree to signal to employers that they are worth hiring. It’s hard to blame them for this given how our society works.)

So a student may have good reason to enroll in a college, register for a course, and not attend. But what should we think about this and what should professors do? Some professors, of course, may be annoyed or insulted if students are apparently unconcerned to attend regularly or show up on time. I was in the past, but no longer am. I still, though, have a hard time tolerating feigned surprise at grades from students who obviously did not prioritize the class. I would prefer a student who says “it’s not worth my coming to class; I’ll just try to pass without doing so” to one who lies about how hard they are trying to do the work. Frankly, I am coming to think that if they pass, the former simply deserve congratulations. (If they don’t pass, they can’t expect me to be upset. I can root for their passing, without being surprised if they don’t.) But, honestly, I’d be hugely surprised if they did at all well without attending. That is the main concern—the best pedagogy.

Why would I be surprised if a non-attending student passed? Frankly, I think that the vast majority of people learn better in a class with a professor than they can without. If nothing else, in philosophy and other humanities classes, they learn something very important—how to engage in good civil, honest, and productive discourse. That does affect how they perform on exams and papers. What I expect in all of the writing my students do—whether on a short essay exam, longer essay exams, or papers—is a well-written and well-thought-out, honest and civil response to whatever prompt is provided. I want them to do philosophy, after all, not sophistry or fluff. Attending class means being in an environment designed to help them learn. If they participate as I hope they do, they can also help improve that environment. That makes for better outcomes for all in the class. Even if they don’t participate—and, again, I realize doing so is honestly hard for some students—they are likely to do better simply because they hear the sort of discourse I seek to promote. If they hear others practicing good discourse, they are likely to pick up on what it is. Attendance helps.

The whole point of classes is that for most students, they promote learning—for those attending. Why, then, would someone want to register for a class if they don’t plan to attend? One answer is that the current system mainly doesn’t allow them to get the credentials of college without doing so. Mainly. We do have fully asynchronous online classes for which one does the work on one’s own time so long as one completes it by the required deadlines, including finishing it all by the end of a semester. (But why insist on a time limit?)

While we don’t have a system conducive to students not registering for classes and yet getting credentialed, that isn’t a reason to require attendance in the classes we offer. Perhaps we ought to make it possible for students to take a syllabus, learn the material on their own, and sit for an exam when they feel themselves ready, without imposing a schedule on them. If they pass, great. If not, perhaps they try actually taking the class (i.e., including attending). That may be what we should do. Until then, some of us will require attendance and some will not.

Open for comments and discussion. What do others think?

The World is Not a Therapy Session

Braver Angels does fantastic work helping people improve conversations with those with whom they have significant and stress-inducing disagreements, so that they can gain greater mutual understanding of each other, thereby reducing polarization. It seems to work. As I noted earlier, though, the desire to maintain or improve one’s relationships with others is only one of the two main reasons we engage in discourse. The other is to exchange information, both “teaching” and “learning.” As I noted in that previous post, I worry about the “truth deficit” likely to emerge if we stress mutual understanding (of each other rather than of each other’s views). Here, I’ll discuss this a bit further.

What is encouraged in Braver Angels’ workshops is active listening, where one attends to what the other says, providing non-verbal cues of interest, along with reflecting back to the other what they said. In a therapeutic setting, reflecting back to another what they said can be incredibly useful. “People like having their thoughts and feelings reflected back to them” (Tania Israel, page 51), and so it increases their comfort level when in therapy, thereby allowing them to open up. For therapeutic purposes, it seems really quite useful. Nonetheless, I have long been uncomfortable with it in other settings.

I had a date once with a woman who, throughout dinner, reflected back to me what I had said. It so threw me off that I didn’t really know what to make of it. I don’t recall how long it took for me to realize that she might have resorted to the tactic because she found what I was saying antithetical to her own views (I don’t recall what we were discussing). I’ll never know for sure, as I found it so distasteful that I never saw her again. If the same thing happened today, I’d probably ask why she was doing it, but I suspect there are others who would do as I did and walk away. (I don’t deny, of course, that others appreciate it.)

Again, the technique has value—there is good evidence that it helps people feel comfortable, which can be useful both in developing relationships and in therapy situations (see Israel footnotes 5 and 6 on page 74). Importantly, though, the world is not a therapy session and sometimes what matters is exchanging information, not (or not merely) developing a relationship. Put another way, while it’s true that we sometimes want to develop a relationship and learn about the person, other times we want to figure out the truth about a topic and are less willing to accept the truth deficit. If we are trying to persuade someone to change their views about abortion, capitalism, gun control, immigration, schools, welfare rights, or any number of other contentious topics, we might want to know more about our interlocutor, but we also just want to persuade—or be persuaded. (Part of why we want to know who they are is to determine how we might persuade them!)

To be clear, when we are engaging in a serious discussion with someone about an issue we disagree about, we should be open to the possibility that the view we start the conversation with is mistaken and that we can thus learn from our interlocutor. Of course, when we start, we will believe we are right and hope to teach (persuade) the other, but we have to know we can be wrong. We should also be open to the possibility that neither of us is right and there is some third position (perhaps somewhere between our view and theirs, perhaps not) that is better still. What is important in these cases, though, is figuring out the truth about the issue (or getting closer to it). We shouldn’t give that up lightly.

Getting to the truth may, in some instances, be aided by reflecting to each other what we’ve said. Obviously, if we do not understand what our interlocutor has said, we should ask them to explain. Sometimes we simply need some clarification. We need to know, after all, that we are actually talking about the same thing, and we need to understand where our views overlap and where they do not. Sometimes, also, we might ask someone to repeat what they say in different words to make sure we understand; we might also do it for them (common in teaching). But if reflecting back to each other is used for other reasons (making the other feel comfortable, for example), I wonder how far it goes. It seems to me that we need to challenge each other. Sometimes, we may even need to be abrasive—or to have others be abrasive toward us. This can help us improve our own views. (For more on that, see Emily Chamlee-Wright’s article on the topic here, as well as my response. See also Hrishikesh Joshi’s Why It’s OK to Speak Your Mind.)

In short, it seems to me that in normal discourse with someone with whom we disagree, we ought to be at least as concerned with determining the best view as we are with making each other comfortable. Making each other comfortable is important, but perhaps primarily as a precursor to honest conversation. If I say, for example, that “I believe we should have completely open economic borders, perhaps just keeping out known criminals” and you reply “let me be sure I understand; you think we should not stop anyone from coming into the country (perhaps unless they are criminals in their own country) even if it means they take our jobs, push for an end to Judeo-Christianity, and bring in drugs,” I am likely to skip over the first part—which strikes me as unnecessary and vaguely insulting—and move on to the latter claims, which I think are all mistakes. I might think “OK, so they wanted to be clear” or “OK, they wanted time to gather their thoughts,” but if it becomes a regular part of the conversation, I am less likely to continue engaging (and, frankly, less likely to trust the other). I may even wonder why people approach all of life as if it’s a therapy session.

Three News Items to Rally Around

Since I spend a good bit of my time thinking about polarization and ways to combat it, I thought I would bring attention to three recent news items that should help reduce polarization but seem to mostly go unnoticed.

First, there is this from WaPo 10/24/2021, about a police chief in a town in Georgia, seeking to have police officers shoot to incapacitate rather than to kill (so, shooting in the legs or abdomen, for example, instead of the chest). Of course, it would be best if no one had to be shot at all, but those who (rightly) complain about police violence should be embracing this as an improvement, as it would presumably mean fewer killings by police. And those who worry endlessly about “law and order” would seem to have to choose between that and saying “yeah, we don’t mind it if the police kill people.” Since the latter would likely be seen as including some nefarious beliefs, it’s hard to imagine why they, too, wouldn’t embrace it.

Second, from NYT 11/3/2021, is a short piece about a Swiss company literally taking CO2 out of the air and making soda with it. Why everyone isn’t talking about this ecstatically is beyond me. I know folks on the (pretty far) left who worry endlessly about global warming and claim we have to stop this and stop that to at least slow it down before we all die. I know folks on the (pretty far) right who claim, more or less, that global warming is fake news. Either way, this should be good news. If global warming is fake, then this sort of technological advancement may be uninteresting in the long run—but those on the right should be happy to say “OK, we know you’re worried, why don’t you invest in this to help?” If it’s not fake news (fwiw, it’s not), this may be the way to save us and the planet. Those on the left (assuming they don’t want simply to be victims and keep fighting about “green new deal” sorts of regulations) should be embracing the possibilities, declaring “yes, we need more of this as a good way forward without killing the economy and making everyone worse off.”

Finally, from Axios 11/5/2021, is a story on the jobs report. In a nutshell, “America has now recovered 80% of the jobs lost at the depth of the recession in 2020. … Wages are still rising: Average hourly earnings rose another 11 cents an hour in October, to $30.96. That’s enough to keep up with inflation.” I know that some question the specific numbers. That’s no surprise. What is surprising (even given how bad Dems usually are on messaging) is that Biden and the Dems haven’t been touting this at every chance. It should please Reps as well, except that it may make some swing voters less likely to go to their side.

The above three stories are pretty clearly good news for everyone.   The third is perhaps better for Dems than Reps, but somehow they haven’t decided to hype it up or use it as a way to convince moderate legislators or voters to help them.  The first and second are good for everyone.  Yet it doesn’t seem like many are talking about any of the three.  It’s almost as if both sides of our political divide want to remain divided.  And to alienate those of us who refuse to take either side.  Or perhaps they want to clearly demonstrate that neither side should be taken seriously and it’s high time for a party to emerge in the middle. 

The “middle” here might be interesting.  What party consistently opposes state coercion and force against civilians?  What party consistently opposes the state looking the other way when negative externalities become worse and worse?  What party consistently favors policies that grow the economy so that all will do better?  There is such a party, even if it has its own problems.

Vaccines, Science, Judgement, & Discourse

My very first entry into this blog—back on July 2, 2020—was about wearing face coverings because of Covid. That was fairly early into the pandemic, but I think the post has aged very well and I still stand by it.  It seems clear that when there are many cases of a serious new infection, people should wear masks if they go into an enclosed space with lots of unknown others. I also think, though, that it would be wrong to have government mandates requiring that people wear masks (except in places, like nursing homes, where the occupants would be at a known and significant risk) and that private businesses should decide the policy for their brick and mortar operations, just as individuals should decide the policy for their homes.  There is nothing inconsistent in any of that.

Similarly, it seems to me that everybody who can, should want to be inoculated against serious infections (having had the actual infection is likely sufficient). Again, that doesn’t mean that it should be government mandated. (I’m so pro-choice, I think people should be able to choose things that are bad and foolish; I don’t think they should be able to choose things that clearly cause harms to others, but neither the vaccine nor its rejection by an individual does that, so far as I can tell.) We shouldn’t need government mandates to encourage us to follow the science.  So let’s discuss that.  

Acetylsalicylic acid alleviates headaches, fevers, and other pains. I don’t know how that works. Here’s a guess: the acid kills the nerves that are firing. I actually doubt there is any accuracy in that guess at all, but it doesn’t matter. I don’t need to know how aspirin works. I know it works and is generally safe, so I use it. How do I know this? It’s been well tested, both by scientists and by tremendous numbers of people throughout the world.

Now, I actually think I have a better sense of how vaccines work than how aspirin works, though I doubt that holds for the new mRNA vaccines, and I realize I could be wrong. Again, it doesn’t really matter. I’ll use them nonetheless—and for the same reason. The fact is that most of the time, most or all of us simply trust in science. We use elevators, escalators, cars, planes, trains, clothing with new-fangled fabrics, shoes with new-fangled rubber, foods with all sorts of odd new additives, etc.—all of which were developed with science. And we don’t usually let that bother us.

What seems to me foolish in standard vaccine refusal is roughly the same as what seems foolish to me in opposition to using the insect repellent DEET in areas where mosquitoes carry malaria, which kills many people. It’s true that DEET causes some significant problems, but it is unlikely that those problems are worse than the many deaths that would result without it. This seems clear just based on the historical use of the chemical. Similarly, vaccines may cause some problems, but the (recent) historical use suggests pretty clearly that they save lives.

Of course, there are always mistakes. Science is constantly evolving—it is more of a process, after all, than a single state of knowledge. Scientists make mistakes. Worse, sometimes scientists bend to their desires and sometimes industries have enough financial power to change the way science is presented. (Looking at you, sugar industry!) Given that and a personal distrust of government, I certainly understand when people want to wait for evidence to settle.

A drug or other scientific advancement used too early may well turn out to be more problematic than it’s worth. But aspirin has been well tested. And vaccines have been well tested. Even the recent Covid vaccines have been well tested. The fact is you are far more likely to die from Covid if you are unvaccinated than if you are vaccinated. Granted, the odds of dying either way are thankfully slim for most of us. But what people are now faced with is a free and easy way to avoid (a small chance of) death. Admittedly, it’s possible that in 20 years we’ll learn that these new vaccines cause cancer or such. But scientific advancement will continue, and the fight against cancer is already far better than it was at any time in the past. So the choice is between a free and easy way to avoid a chance of death or serious illness now, combined with some chance of a problem later that we may by then know how to deal with, and, well, not avoiding that chance. Maybe this is a judgement call, but the former seems pretty clearly the better option in standard cases. (Other downsides, so far as I can tell, are mostly fictitious. If you’re worried about a computer chip embedded in the vaccine, for example, realize you could have had one put in you when you were born.)

About its being a judgement call: consider using a GPS. Some people just slavishly listen to the directions from their GPS. Unfortunately, this can have pretty bad results. Other people refuse to use a GPS at all, perhaps thinking they have to do it on their own. For me, the GPS (in my phone) is a tool that is helpful for getting where I need to go when I can’t really remember all the directions well or simply don’t trust my ability to do so. Still, I listen to the GPS and sometimes override its directions, for example, if I think it’s going in an unsafe way or a way that’s likely to cause more problems. Here too, judgement is needed.

Unfortunately, we all seem to think we individually have great judgement even though it’s obvious that not all of us do. Or, perhaps better, none of us do all of the time. Sometimes we have to recognize that others know better than we do and trust them.

So, what should we do? We should each try to be honest with ourselves about whether our judgement is likely to be better than that of those telling us to do other than we would choose. We should listen to people who are actually able to consider all of the relevant evidence. Because it’s unlikely that any single source of information will always be completely trustworthy, we should likely listen to a variety of generally trustworthy sources.

We need to find people we can rely on—mentors or people recognized as experts in the relevant field—and take their views seriously. This may simply push the problem back a step: those whose judgement leads them to make bad choices may simply choose to listen to other people with similarly bad judgement. That is a real problem worth further investigation. My only suggestion here is to trust those who are leading good lives and who have the trust of their professional peers. I don’t pretend that is sufficient, but can’t say more here except to note that we can only hope to get better decisions, for ourselves and others, if we have better discussions. To that end, see this post. Also, realize that if people would in fact standardly make better decisions (in part by having better discussions prior to making decisions), there would be less call for government intervention. Indeed, if we had better conversations across the board, we would have fewer people wanting government intervention. Realizing that those who have suffered through Covid are inoculated, for example, should stop others from trying to pressure them to get vaccinated.


Thanks to Lauren Hall, Connor Kianpour, and JP Messina for suggesting ways to improve this post.

Community, Selfish Miscreants, and Civil Discourse

In my last post, I discussed the paradox of community. Recently, I was reminded of one standard way that paradox is ignored and debates within communities are badly framed. It’s worth considering this as a way not to proceed if one wants to improve civil discourse.

Typically, one of the parties in a dispute about the way the community should move—this could be newcomers or long-time members, though it’s more likely to be the latter, simply because they likely have some cohesiveness as a group—claims to represent the overall community while the other side is simply selfishly representing itself. The dialogue might be explicitly put in terms of those who are selfish and those who are selfless, or in terms of those interested only in themselves and those interested in the community as a whole.

Here is an example: One group might say they are seeking to add a pool to the community (at the expense of all community members) because it would be good for the community as a whole, giving community members a location and activity in which to foster discussion, which is good for encouraging community (by strengthening the relationships of community members), while also (of course) providing a form of exercise to keep community members healthy. Advocates of the pool might then say they’ve talked to many of the others in the community who also want the pool, and so those who advocate for the pool are really the “we” while those arguing against the pool are selfishly concerned only with their own finances and not with the health of their community members or the community itself.

The pool issue is thus framed as one between those concerned with “we, the community” and those concerned with “the me”—anyone arguing against the pool is portrayed as being selfishly concerned only with their own interests, unable to suppress their selfishness for the greater good of the “we” that is the whole community. They don’t even understand that as part of the “we,” getting the pool would be good for them! This, of course, is nonsense. (See Isaiah Berlin’s statement about “positive liberty” on pages 22-24 here.)

Consider a different way the issue might have been framed if those opposing the pool started the discussion. They would insist they have the community’s interests at heart, worried that the added expense will be hard on community members, that some may genuinely fear a pool (perhaps a sibling drowned in a pool), and that all community members will have additional liability, not merely financial, moving forward. In short, on their view, the addition of a pool puts a strain on community members, and thereby strains the community. They then insist that those advocating for a pool are selfish, interested in something only a few swimmers will benefit from, while all share the costs.

Again, the pool issue is framed as one between those concerned with “we, the community” and those concerned with “the me”—this time, anyone arguing for the pool is portrayed as being selfishly concerned only with their own interests, unable to suppress their selfishness for the greater good of the “we” that is the whole community. They don’t even understand that as part of the “we,” not getting a pool would be good for them!  This, of course, is again nonsense.

In both scenarios—one where pool advocates control the terms of debate and one where anti-pool folks control the terms of the debate—the other side is said to be selfish, each on that side only concerned with the “I.” The possibility that they are genuinely concerned with the entire community is disregarded in the normal Orwellian move to use language to one’s advantage regardless of truth. (If it’s old-timers arguing for one side, they might even try to “explain”—Orwell style—that those arguing against it are newcomers who don’t understand the importance of the “we” in this community because they are still embedded in the “me” culture. They may even believe this.)*

This way of engaging in discourse with others—whether in a small community or a large polity—is misguided at best.  Once again, what we need is open and honest discourse where all realize that disagreement is possible (even likely) and useful and that those we disagree with can be honest and well meaning.  Insistence on labeling those we disagree with “selfish” is a more likely indication that one is a miscreant than being so labeled.


*For my part, I wish people would get over thinking there was something wrong with being concerned with one’s own interests. If people would really concern themselves with their own interests (and those of their own family and friends), they would spend less time bothering others (see this). They might even be more receptive to open and honest dialogue.

The Paradox of Community

Conceptually, community is distinct from neighborhood.  A community can be in a neighborhood, but it might instead consist of widespread people who share some commonality (the community of PPE scholars, for example).  A neighborhood, for its part, may merely be a place people live, not knowing those that also live there. 

Take communities to be groups of people bound together by traditions. Traditions are essential to community. They also vary by community. They might be matters of language, religion, commitment to country, behaviors, holidays, heritage, or any number of other things, some requiring more strict abidance by group norms, some requiring less. Traditions necessarily (but, importantly, not always problematically) hold us back, keep us limited—for the simple reason that people are committed to them. When people are committed to one way of doing things, they are resistant to changes to it. A commitment to car culture, for example, makes it less likely that a group would find (or even look for) an alternative means of transportation. (Or accept such if offered. Think of Segways—why aren’t these available for long-distance use? Or sealed from rain and cold?)

While traditions hold people back, they also provide a foundation for change.  From the security of being able to interact with others in accepted ways, one can develop new ways to do so—and new ways not to do so.  Because they have traditions, communities make it possible to innovate. Innovation, though, can cause the community to change or even disintegrate. Tradition and innovation are symbiotic even while they simultaneously threaten each other.  Call this the paradox of community (it’s at least a significant tension).

The paradox of community—the fact that a community’s traditions make innovation possible while simultaneously trying to prevent innovation (because innovation could bring the end of the tradition)—makes life in community … interesting.

Another fact about communities is that they either grow or die; stasis is illusory. Communities grow as their members change (some join, some exit, some change themselves), innovate, bring about changes to the traditions (adding some, altering others, ending still others). This is why the paradox is so important.

Some within a community can become so committed to a particular tradition(s) of the community that they work to slow the pace of the community’s growth in order to prevent the altering or ending of their favored tradition(s) or the inclusion of others. They may do this by trying to encourage newcomers to learn and accept the existing traditions of the community or by actively working to create an environment whereby those seeking change are limited. If they succeed too much—preventing any change in the community’s traditions—they attain stagnation rather than stasis. This is because absence of change in a community (as for an individual person or any animal) brings the end of the community. It means no new members—and with no new members, it dies as its members die. Change—innovation—is essential to community.

Of course, new people may attempt to join the community. When they do, they would bring their own histories, cultures, beliefs, and ideals. They could (and perhaps should) learn about the community’s ways of doing things. That is consistent with their bringing their own ways of doing things (and their histories, cultures, beliefs, and ideals). It is consistent, that is, with change. But if those within the community seek to limit change, they may try instead to indoctrinate the newcomers into the community’s traditions so that they live as those in the community now live, rather than bringing anything different. Indoctrination thus treats newcomers as having nothing of their own to contribute, as if their histories, cultures, beliefs, and ideals have no place in the community. Newcomers would thus not be allowed to bring their ideas and preferences into the community’s traditions—those traditions would not be allowed to change. Such newcomers are, then, likely to exit the community. (Notice that this does not mean they physically move away or drop their official membership—remember, communities are not the same as neighborhoods (or associations).)

To build community, change must be permitted. This means that all in the community must listen to each other, open to hearing new things that might be incorporated into the web of community activity and the traditions that shape them. This does not mean jettisoning everything previously held dear, but it does mean being open to the possibility of doing so (likely not all at once). Long-time members of the community can teach newer members how things were or are done, but that counts no more than what newer members bring to the table. Importantly, those whose ideas are rejected out of hand have no reason to participate in the community. Ignoring this—thinking that all learning here is in one direction—will simply give rise to factions, splintering what was a community, killing it while perhaps giving birth to new, smaller communities as those factions continue to grow.

So, both tradition and innovation are essential to community. What this means, in part, is that while change is necessary, the pace of change may be too much for some people within a community, at least those committed to one or more of its traditions. Still, change can’t be stopped; a successful attempt to stop it kills the community. The question for those in a community is thus whether their favored tradition(s) and its (or their) history are more important than the community itself. To side with a tradition is to side with those no longer present; to side with community is to side with those currently constituting the community—including those who wish to see change.

Of course, those siding with a tradition may take that tradition to have independent value and thus to be worth protecting. They may take this to be a principled defense of preventing change in the community. It is not. The community from which a defended tradition stems, like all communities, must be able to change. (Again, stagnation means death.) Indeed, all surviving communities have what can reasonably be called traditions of change—ways that change takes place. So when defenders of one tradition seek to prevent change, they are pitting one part of the community and its traditions against another and claiming that one of the traditions should be defended at the cost of another—their favored tradition at the cost of the community’s tradition of change. That, though, is just a preference. One cannot just assume that one favored tradition is more valuable than another. After all, those seeking change may rightly claim to be defending a tradition of change within the community.

Putting the last point differently, those seeking change are defending the community as the community currently is and is growing with its current members and their preferences. Those seeking to prevent change, by contrast, are defending only part of the community—some specific tradition(s) they happen to prefer—and, by seeking stagnation, killing the community.

Lest I be thought too critical of defenders of particular traditions, I should note that I do not think there is a good principled reason for either protecting particular traditions or for changing or jettisoning them. In either case, further considerations are necessary. What we need to determine, on my view, is when interference is justifiably permitted—what principles of interference we ought to accept rather than simply what traditions we happen to prefer. (I discuss some such considerations here and in my 2014.)