Tag Archives: Social Criticism

Why ‘ProSocial Libertarians’?

I am wary of isms and labels. They are used too often by too many as excuses to stop thinking. Worse, no doubt aware of the human tendency to avoid ideas that challenge our preconceptions, unscrupulous advocates on all sides use labels such as ‘socialism’ or ‘far right’ to pillory views with which they disagree, in effect saying ‘These ideas are beyond the pale. You can ignore them.’ This, in turn, further discourages people from venturing outside the safety of their thought bubbles and trying to understand why others might hold different views. 

Although I am quite sensitive to this thought-stultifying use of labels — having taught critical thinking for years — I am sure I am not the only person for whom effectively labeling something as beyond the pale piques one’s curiosity instead of squelching it. (This, by the way, is the main reason — along with my name — that I first read Ayn Rand.) So, fortunately, there are also people who want to be challenged and seek out ideas that put their preconceptions under strain. If you fall into this group, you should enjoy this blog.

Despite the risks that labels bring, we cannot manage without them. To minimize the risks, we should acknowledge that labels are only a starting point for discussion and that the meaning of any politically interesting terms will need to be clarified on an ongoing basis. 

In light of all this, if I had to choose a label that best captures my political orientation, that label would be ‘libertarian’. I found it dismaying, then, during the COVID-19 pandemic, to see the term ‘libertarian’ — as well as related terms like ‘freedom’ — arrogated by a rogues’ gallery of activists and politicians who have been called — with some justification — antisocial.

What was dismaying was that these so-called ‘libertarians’ were acting out an old, muddleheaded conception of libertarianism that many people could (wrongly) take as reason to dismiss libertarian ideas as unworthy of serious consideration. For, according to this old, muddleheaded conception, libertarians just _are_ antisocial. Like Randy Weaver, libertarians on this conception want nothing more than to be left alone and they will happily head to the woods with their guns and family to achieve this end. Properly understood, however, libertarians need not be Randy Weavers. Or, at least, so I believe. (Please note: In no way do I intend for my use of Randy Weaver as an example of an antisocial libertarian to diminish the tragedy and injustice that befell him and his family at the hands of the United States government.)

Given what the honest use of labels requires, I want to be as clear as I can about what I mean by ‘libertarian’ and why being a libertarian involves being prosocial, not antisocial. But there is no such thing as a conceptual dictator, so any work towards understanding libertarianism will, of necessity, be a joint enterprise. Hence, the idea of this blog: a civil forum for exploring what it means to be a libertarian and the ways being a libertarian involves being prosocial. Hence also, our name: ProSocial Libertarians.

Discourse and Attendance in College Classes

Many of my posts on RCL have been about discourse. None has been directly about discourse in classrooms, but I do try to make my classes sites of civil discourse. This is both because student dialogue is what makes the classroom fun and exciting for me and because I believe it is an essential part of college. (See this.) The discourse that occurs in classrooms and elsewhere on college campuses is an invaluable part of the college experience.

As I’ve discussed previously, I think there are two basic reasons to engage in discourse: to maintain or nourish a relationship, or to convey information. (See here.) In college classrooms, I will simply assume, the latter reason is paramount. Conveying information is also hugely important elsewhere on college campuses—students learn a lot from each other—but there the first reason matters as well, as students make connections with others, some of whom will be lifelong friends and some of whom will be business associates.

This post is primarily about classrooms, so it’s the conveying of information that is relevant here. In particular, it’s what is relevant when asking whether attendance should be required in college classes. My own view about this has changed over the years. In the past, I’ve marked people down for poor attendance or multiple tardies, or made class participation—for which attendance is a necessary prerequisite—a separate and substantial part of students’ grades. At a certain point, though, colleagues convinced me that making participation a part of a student’s grade was unfair to those students who have significant psychological issues with speaking in class. At first, I responded by allowing the “participation” to take place outside of class—in office visits or email. Eventually, I dropped it as a requirement and instead made it only a way to improve one’s grade. I’ve never stopped believing, though, in the importance of attending and participating in class.

Over the years, I’ve had students approach me about taking a class without attending. Some had very good reasons they could not attend during the day when the course was offered—needing to work full time to support their family, for example. My standard reply was always something like “no, attendance is required” or “you can’t pass this class without attending, so no.” More recently, I have been questioning the wisdom of that. Answering the question requires considering the sort of information that is conveyed in classes.

As a philosopher, I am not at all concerned that students learn biographical facts about philosophers and only somewhat concerned that students learn even basic facts about different theories. My main concern is in getting students to see how to do philosophy. What that means is that I want students to learn how to think clearly, check assumptions, make valid inferences, and engage in both verbal and written discourse about arguments and their premises, inferential moves, and conclusions. I want to convey to them how to do this well.

Given what I want the students to get out of my classes, my question becomes “is attendance necessary for students to learn to think clearly, check assumptions, make valid inferences, and engage in both verbal and written discourse about arguments and their premises, inferential moves, and conclusions?” Another way to put the question: “do individual learners need professors to learn how to do those things?” I think most do.

Classically, education has three stages: grammar, logic, rhetoric. I prefer to think of these in terms of mimesis, analysis, synthesis. The idea is that young children must memorize information, imitating language and such, and until they have some minimum amount of knowledge, they can’t be expected to do anything else. Once they have that, though, they can move on to the second stage wherein they can use logic to analyze things, figuring out what goes where and why. They can even question—analyze—the bits of information they previously learned. Only with mastery of analysis can they move on to the third stage wherein they can make something new, synthesizing something from the parts of what they have analyzed.

Teachers are clearly needed for mimesis—someone has to provide the student with what it is that should be learned (memorized, imitated). Perhaps teachers are also needed for the beginnings of the second stage, pointing students in the right direction as they begin to do logical analysis. One needs to understand basic rules of deductive logic to do analysis well, and I suspect most of us need someone to teach us that. But does everyone? Frankly, I doubt it, though I suppose how much teachers are needed here will depend on how much of logic is innate to our reasoning abilities. It seems even less likely that teachers are necessary for the third stage, though clearly someone to give us direction can be useful, and I think it likely that most of us learn best in dialogue with others. If that is right, attendance in class would clearly be useful. So perhaps that is the answer to my question: most people need direction, they can only get that in class, so attendance should be required.

What, though, if some want to learn without professors? Some certainly can do so. Whether they should be allowed to do so when in college is another question. After all, if they are able to do so, why should they enroll in college at all? If they do enroll, the college can simply say “you are enrolling here and that means accepting that we know best how you will learn (or at least recognizing that we get to decide), and we deem it necessary for you to attend courses.”

Some will no doubt think that the sort of view just attributed to a college is overly paternalistic. On the other hand, some people are unfortunately wrong when they think they can teach themselves collegiate-level material. Some people, after all, read great books and completely misunderstand them. I have met people who thought themselves erudite for reading Hegel, Nietzsche, Marx, and others, but whose comprehension of those authors was abysmal. Such people would be well served by a policy requiring course attendance. Without it, they would lack comprehension and thus do poorly on any assessments.

Still, presumably some can read those materials and do well. (In other systems, after all, attending classes is—was?—not expected; one studies a set of materials and then is examined.) Others might not do well, but do well enough for their purposes. They may, that is, only want some knowledge (or some level of skill)—happy to have read the texts even if their comprehension is limited—and be happy to get a C or C- in a course. They may have reason to want a college degree independent of learning well. (In our society, after all, many seem only to go to college to get a degree to signal to employers that they are worth hiring. It’s hard to blame them for this given how our society works.)

So a student may have good reason to enroll in a college, register for a course, and not attend. But what should we think about this and what should professors do? Some professors, of course, may be annoyed or insulted if students are apparently unconcerned with attending regularly or showing up on time. I was in the past, but no longer am. I still, though, have a hard time tolerating feigned surprise at grades from students who obviously did not prioritize the class. I would prefer a student who says “it’s not worth my coming to class; I’ll just try to pass without doing so” to one who lies about how hard they are trying to do the work. Frankly, I am coming to think that if they pass, the former simply deserve congratulations. (If they don’t pass, they can’t expect me to be upset. I can root for their passing without being surprised if they don’t.) But, honestly, I’d be hugely surprised if they did at all well without attending. That is the main concern—the best pedagogy.

Why would I be surprised if a non-attending student passed? Frankly, I think the vast majority of people learn better in a class with a professor than they can without. If nothing else, in philosophy and other humanities classes, they learn something very important—how to engage in good civil, honest, and productive discourse. That does affect how they perform on exams and papers. In all of the writing my students do—whether short essay exams, longer essay exams, or papers—I expect well-written, well-thought-out, honest, and civil responses to whatever prompt is provided. I want them to do philosophy, after all, not sophistry or fluff. Attending class means being in an environment designed to help them learn. If they participate as I hope they do, they can also help improve that environment. That makes for better outcomes for all in the class. Even if they don’t participate—and, again, I realize doing so is honestly hard for some students—they are likely to do better simply because they hear the sort of discourse I seek to promote. If they hear others practicing good discourse, they are likely to pick up on what it is. Attendance helps.

The whole point of classes is that, for most students, they promote learning—for those who attend. Why, then, would someone register for a class if they don’t plan to attend? One answer is that the current system mainly doesn’t allow them to get the credentials of college without doing so. Mainly. We do have fully asynchronous online classes for which one does the work on one’s own time so long as one completes it by the required deadlines, including finishing it all by the end of a semester. (But why insist on a time limit?)

While we don’t have a system conducive to students not registering for classes and yet getting credentialed, that isn’t reason to require attendance in the classes we offer. Perhaps we ought to make it possible for students to take a syllabus, learn the material on their own, and sit for an exam when they feel themselves ready, without imposing a schedule on them. If they pass, great. If not, perhaps they try actually taking the class (i.e., including attending). That may be what we should do. Until then, some of us will require attendance and some will not.

Open for comments and discussion. What do others think?

The World is Not a Therapy Session

Braver Angels does fantastic work helping people improve conversations with those they have significant and stress-inducing disagreements with so that they can gain greater mutual understanding of each other, thereby reducing polarization. It seems to work. As I noted earlier, though, the desire to maintain or improve one’s relationships with others is only one of the two main reasons we engage in discourse. The other is to exchange information, both “teaching” and “learning.” As I noted in that previous post, I worry about the “truth deficit” likely to emerge if we stress mutual understanding (of each other rather than of each other’s views). Here, I’ll discuss this a bit further.

What is encouraged in Braver Angels’ workshops is active listening, where one attends to what the other says, providing non-verbal cues of interest, along with reflecting back to the other what they said. In a therapeutic setting, reflecting back to another what they said can be incredibly useful. “People like having their thoughts and feelings reflected back to them” (Tania Israel, page 51), and this increases their comfort level when in therapy, thereby allowing them to open up. For therapeutic purposes, it seems really quite useful. Nonetheless, I have long been uncomfortable with it in other settings.

I had a date once with a woman who, throughout dinner, reflected back to me what I had said. It so threw me off that I didn’t really know what to make of it. I don’t recall how long it took for me to realize that she might have resorted to the tactic because she found what I was saying antithetical to her own views (I don’t recall what we were discussing). I’ll never know for sure, as I found it so distasteful that I never saw her again. If the same thing happened today, I’d probably ask why she was doing it, but I suspect there are others who would do as I did and walk away. (I don’t deny, of course, that others appreciate it.)

Again, the technique has value—there is good evidence that it helps people feel comfortable, which can be useful both in developing relationships and in therapy situations (see Israel footnotes 5 and 6 on page 74). Importantly, though, the world is not a therapy session, and sometimes what matters is exchanging information, not (or not merely) developing a relationship. Put another way, while it’s true that we sometimes want to develop a relationship and learn about the person, other times we want to figure out the truth about a topic and are less willing to accept the truth deficit. If we are trying to persuade someone to change their views about abortion, capitalism, gun control, immigration, schools, welfare rights, or any number of other contentious topics, we might want to know more about our interlocutor, but we also just want to persuade—or be persuaded. (Part of why we want to know who they are is to determine how we might persuade them!)

To be clear, when we are engaging in a serious discussion with someone about an issue we disagree about, we should be open to the possibility that the view we start the conversation with is mistaken and that we can thus learn from our interlocutor. Of course, when we start, we will believe we are right and hope to teach (persuade) the other, but we have to know we can be wrong. We should also be open to the possibility that neither of us is right and there is some third position (perhaps somewhere between our view and theirs, perhaps not) that is better still. What is important in these cases, though, is figuring out the truth about the issue (or getting closer to it). We shouldn’t give that up lightly.

Getting to the truth may, in some instances, be aided by reflecting to each other what we’ve said. Obviously, if we do not understand what our interlocutor has said, we should ask them to explain. Sometimes we simply need some clarification. We need to know, after all, that we are actually talking about the same thing, and we need to understand where our views overlap and where they do not. Sometimes, also, we might ask someone to repeat what they said in different words to make sure we understand; we might also do it for them (common in teaching). But if reflecting back to each other is used for other reasons (making the other feel comfortable, for example), I wonder how far it goes. It seems to me that we need to challenge each other. Sometimes, we may even need to be abrasive—or to have others be abrasive toward us. This can help us improve our own views. (For more on that, see Emily Chamlee-Wright’s article on the topic here, as well as my response. See also Hrishikesh Joshi’s Why It’s OK to Speak Your Mind.)

In short, it seems to me that in normal discourse with someone with whom we disagree, we ought to be at least as concerned with determining the best view as we are with making each other comfortable. Making each other comfortable is important, but perhaps primarily as a precursor to honest conversation. If I say, for example, that “I believe we should have completely open economic borders, perhaps just keeping out known criminals” and you reply “let me be sure I understand; you think we should not stop anyone from coming into the country (perhaps unless they are criminals in their own country) even if it means they take our jobs, push for an end to Judeo-Christianity, and bring in drugs,” I am likely to skip over the first part—which strikes me as unnecessary and vaguely insulting—and move on to the latter claims, which I think are all mistakes. I might think “OK, so they wanted to be clear” or “OK, they wanted time to gather their thoughts,” but if it becomes a regular part of the conversation, I am less likely to continue engaging (and, frankly, less likely to trust the other). I may even wonder why people approach all of life as if it’s a therapy session.

Three News Items to Rally Around

Since I spend a good bit of my time thinking about polarization and ways to combat it, I thought I would bring attention to three recent news items that should help reduce polarization but seem to mostly go unnoticed.

First, there is this from WaPo 10/24/2021, about a police chief in a town in Georgia seeking to have police officers shoot to incapacitate rather than to kill (so, shooting in the legs or abdomen, for example, instead of the chest).  Of course, it would be best if no one had to be shot at all, but those who (rightly) complain about police violence should be embracing this as an improvement, as it would presumably mean fewer killings by police.  And those who worry endlessly about “law and order” would seem to have to choose between that and saying “yeah, we don’t mind it if the police kill people.”  Since the latter would likely be seen as including some nefarious beliefs, it’s hard to imagine why they, too, wouldn’t embrace it.

Second, from NYT 11/3/2021, is a short piece about a Swiss company literally taking CO2 out of the air and making soda with it. Why everyone isn’t talking about this ecstatically is beyond me. I know folks on the (pretty far) left who worry endlessly about global warming and claim we have to stop this and stop that to at least slow it down before we all die. I know folks on the (pretty far) right who claim, more or less, that global warming is fake news. Either way, this should be good news. If global warming is fake, then this sort of technological advancement may be uninteresting in the long run—but those on the right should be happy to say “OK, we know you’re worried, why don’t you invest in this to help?” If it’s not fake news (fwiw, it’s not), this may be the way to save us and the planet. Those on the left (assuming they don’t want simply to be victims and keep fighting about “green new deal” sorts of regulations) should be embracing the possibilities, declaring “yes, we need more of this as a good way forward without killing the economy and making everyone worse off.”

Finally, from Axios 11/5/2021, is a story on the jobs report.  In a nutshell, “America has now recovered 80% of the jobs lost at the depth of the recession in 2020. … Wages are still rising: Average hourly earnings rose another 11 cents an hour in October, to $30.96. That’s enough to keep up with inflation.”  I know that some question the specific numbers.  That’s no surprise.  What is surprising (even given how bad Dems usually are on messaging) is that Biden and the Dems haven’t been touting this at every chance.  It should please Reps as well, except that it may make some swing voters less likely to go to their side.

The above three stories are pretty clearly good news for everyone.   The third is perhaps better for Dems than Reps, but somehow they haven’t decided to hype it up or use it as a way to convince moderate legislators or voters to help them.  The first and second are good for everyone.  Yet it doesn’t seem like many are talking about any of the three.  It’s almost as if both sides of our political divide want to remain divided.  And to alienate those of us who refuse to take either side.  Or perhaps they want to clearly demonstrate that neither side should be taken seriously and it’s high time for a party to emerge in the middle. 

The “middle” here might be interesting.  What party consistently opposes state coercion and force against civilians?  What party consistently opposes the state looking the other way when negative externalities become worse and worse?  What party consistently favors policies that grow the economy so that all will do better?  There is such a party, even if it has its own problems.

Vaccines, Science, Judgement, & Discourse

My very first entry into this blog—back on July 2, 2020—was about wearing face coverings because of Covid. That was fairly early into the pandemic, but I think the post has aged very well and I still stand by it.  It seems clear that when there are many cases of a serious new infection, people should wear masks if they go into an enclosed space with lots of unknown others. I also think, though, that it would be wrong to have government mandates requiring that people wear masks (except in places, like nursing homes, where the occupants would be at a known and significant risk) and that private businesses should decide the policy for their brick and mortar operations, just as individuals should decide the policy for their homes.  There is nothing inconsistent in any of that.

Similarly, it seems to me that everybody who can, should want to be inoculated against serious infections (having had the actual infection is likely sufficient). Again, that doesn’t mean that it should be government mandated. (I’m so pro-choice, I think people should be able to choose things that are bad and foolish; I don’t think they should be able to choose things that clearly cause harms to others, but neither the vaccine nor its rejection by an individual does that, so far as I can tell.) We shouldn’t need government mandates to encourage us to follow the science.  So let’s discuss that.  

Acetylsalicylic acid alleviates headaches, fevers, and other pains.  I don’t know how that works.  Here’s a guess: the acid kills the nerves that are firing.  I actually doubt there is any accuracy in that guess at all, but it doesn’t matter.  I don’t need to know how aspirin works.  I know it works and is generally safe, so I use it. How do I know this?  It’s been well tested, both by scientists and by tremendous numbers of people throughout the world.

Now, I actually think I have a better sense of how vaccines work than how aspirin works, though I doubt that holds for the new mRNA vaccines and I realize I could be wrong.  Again it doesn’t really matter.  I’ll use them nonetheless—and for the same reason. The fact is that most of the time, most or all of us simply trust in science.  We use elevators, escalators, cars, planes, trains, clothing with new-fangled fabrics, shoes with new-fangled rubber, foods with all sorts of odd new additives, etc.—all of which were developed with science.  And we don’t usually let that bother us.  

What seems to me foolish in standard vaccine refusal is roughly the same as what seems foolish to me in opposition to using the insecticide DEET in areas where mosquitoes carry malaria, which kills many people. It’s true that DEET causes some significant problems, but it is unlikely that those problems are worse than the many deaths that would result without it.  This seems clear just based on the historical use of the chemical. Similarly, vaccines may cause some problems, but the (recent) historical use suggests pretty clearly that they save lives.

Of course, there are always mistakes.  Science is constantly evolving—it is more of a process, after all, than a single state of knowledge.  Scientists make mistakes.  Worse, sometimes scientists bend to their desires, and sometimes industries have enough financial power to change the way science is presented. (Looking at you, sugar industry!) Given that and a personal distrust of government, I certainly understand when people want to wait for evidence to settle.

A drug or other scientific advancement used too early may well turn out to be more problematic than it’s worth.  But aspirin has been well tested.  And vaccines have been well tested.  Even the recent Covid vaccines have been well tested.  The fact is you are far more likely to die from Covid if you are unvaccinated than if you are vaccinated.  Granted, the odds of dying either way are thankfully slim for most of us.  But what people are now faced with is a free and easy way to avoid (a small chance of) death.  Admittedly, it’s possible that in 20 years we’ll learn that these new vaccines cause cancer or such.  But scientific advancement will continue, and the fight against cancer is already far better than it was at any time in the past.  So the option is between a free and easy way to avoid a chance of death or serious illness now, combined with some chance of an added problem later that we may know how to deal with, and, well, not avoiding that.  Maybe this is a judgment call, but the former seems pretty clearly the better option in standard cases.  (Other downsides, so far as I can tell, are mostly fictitious.  If you’re worried about a computer chip embedded in the vaccine, for example, realize you could have had one put in you when you were born.)

About its being a judgment call: consider using a GPS.  Some people just slavishly listen to the directions from their GPS. Unfortunately, this can have pretty bad results.  Other people refuse to use a GPS at all, perhaps thinking they have to do it on their own. For me, the GPS (in my phone) is a tool that is helpful for getting where I need to go when I can’t remember all the directions well or simply don’t trust my ability to do so. Still, I listen to the GPS and sometimes override its directions—for example, if I think it’s going an unsafe way or a way that’s likely to cause more problems.  Here too, judgment is needed.

Unfortunately, we all seem to think we individually have great judgment even though it’s obvious that not all of us do.  Or perhaps better, none of us do all of the time.  Sometimes one has to recognize that we have to trust others to know better than we do.  

So, what should we do?  We should each try to be honest with ourselves about whether our judgment is likely to be better than that of those telling us to do other than we would choose. We should listen to people who are actually able to consider all of the relevant evidence.  Because it’s unlikely that any single source of information will always be completely trustworthy, we should likely listen to a variety of generally trustworthy sources.

We need to find people we can rely on—mentors or people recognized as experts in the relevant field—and take their views seriously.  This may simply push the problem back a step: those whose judgment leads them to make bad choices may simply choose to listen to other people with similarly bad judgment.  That is a real problem worth further investigation.  My only suggestion here is to trust those who are leading good lives and who have the trust of their professional peers.  I don’t pretend that is sufficient, but can’t say more here except to note that we can only hope to get better decisions, for ourselves and others, if we have better discussions.  To that end, see this post.  Also, realize that if people would in fact standardly make better decisions (in part by having better discussions prior to making decisions), there would be less call for government intervention.  Indeed, if we had better conversations across the board, we would have fewer people wanting government intervention.  Realizing that those who have suffered through COVID are inoculated, for example, should stop others from trying to pressure them to get vaccinated.


Thanks to Lauren Hall, Connor Kianpour, and JP Messina for suggesting ways to improve this post.

Community, Selfish Miscreants, and Civil Discourse

In my last post, I discussed the paradox of community. Recently, I was reminded of one standard way that paradox is ignored and debates within communities are badly framed.  It’s worth considering this as a way not to proceed if one wants to improve civil discourse.

Typically, one of the parties in a dispute about the way the community should move—and this could be newcomers or longtime members, though it’s more likely to be the latter simply because they likely have some cohesiveness as a group—claims to represent the overall community while the other side simply and selfishly represents itself.  The dialogue might be explicitly put in terms of those who are selfish and those who are selfless, or in terms of those interested only in themselves and those interested in the community as a whole.

Here is an example: One group might say they are seeking to add a pool to the community (at the expense of all community members) because it would be good for the community as a whole, giving community members a location and activity in which to foster discussion, which is good for encouraging community (by strengthening the relationships of community members), while also (of course) providing a form of exercise to keep community members healthy. Advocates of the pool might then say they’ve talked to many of the others in the community who also want the pool, and so those who advocate for the pool are really the “we” while those arguing against the pool are selfishly concerned only with their own finances and not with the health of their fellow community members or the community itself.

The pool issue is thus framed as one between those concerned with “we, the community” and those concerned with “the me”—anyone arguing against the pool is portrayed as being selfishly concerned only with their own interests, unable to suppress their selfishness for the greater good of the “we” that is the whole community. They don’t even understand that as part of the “we,” getting the pool would be good for them! This, of course, is nonsense. (See Isaiah Berlin’s statement about “positive liberty” on pages 22-24 here.)

Consider how differently the issue might have been framed if those opposing the pool had started the discussion.  They would insist that they have the community’s interests at heart, worried that the added expense will be hard on community members, that some may genuinely fear a pool (perhaps a sibling drowned in one), and that all community members will bear additional liability, not merely financial, moving forward.  In short, on their view, the addition of a pool puts a strain on community members, and thereby strains the community.  They would then insist that those advocating for a pool are selfish, interested in something only a few swimmers will benefit from, while all share the costs.  

Again, the pool issue is framed as one between those concerned with “we, the community” and those concerned with “the me”—this time, anyone arguing for the pool is portrayed as being selfishly concerned only with their own interests, unable to suppress their selfishness for the greater good of the “we” that is the whole community. They don’t even understand that as part of the “we,” not getting a pool would be good for them!  This, of course, is again nonsense.

In both scenarios—one where pool advocates control the terms of debate and one where the anti-pool side does—the other side is said to be selfish, each person on that side concerned only with the “I.”  The possibility that they are genuinely concerned with the entire community is disregarded in the usual Orwellian move of using language to one’s advantage regardless of truth. (If it’s old-timers arguing for one side, they might even try to “explain”—Orwell style—that those arguing against it are newcomers who don’t understand the importance of the “we” in this community because they are still embedded in the “me” culture.  They may even believe this.)*

This way of engaging in discourse with others—whether in a small community or a large polity—is misguided at best.  Once again, what we need is open and honest discourse in which all realize that disagreement is possible (even likely) and useful, and that those we disagree with can be honest and well-meaning.  Insistence on labeling those we disagree with “selfish” is a better indication that one is a miscreant oneself than that those so labeled are.


*For my part, I wish people would get over thinking there is something wrong with being concerned with one’s own interests. If people really concerned themselves with their own interests (and those of their own family and friends), they would spend less time bothering others (see this). They might even be more receptive to open and honest dialogue.

The Paradox of Community

Conceptually, community is distinct from neighborhood.  A community can be in a neighborhood, but it might instead consist of geographically dispersed people who share some commonality (the community of PPE scholars, for example).  A neighborhood, for its part, may be merely a place where people live, not knowing those who also live there. 

Take communities to be groups of people bound together by traditions. Traditions are essential to community. They also vary by community. They might be matters of language, religion, commitment to country, behaviors, holidays, heritage, or any number of other things, some requiring stricter abidance by group norms, some requiring less. Traditions necessarily (but, importantly, not always problematically) hold us back and keep us limited—for the simple reason that people are committed to them. When people are committed to one way of doing things, they resist changes to it. A commitment to car culture, for example, makes it less likely that a group would find (or even look for) an alternative means of transportation—or accept one if offered. (Think of Segways: why aren’t these available for long-distance use, or sealed from rain and cold?)

While traditions hold people back, they also provide a foundation for change.  From the security of being able to interact with others in accepted ways, one can develop new ways to do so—and new ways not to do so.  Because they have traditions, communities make it possible to innovate. Innovation, though, can cause the community to change or even disintegrate. Tradition and innovation are symbiotic even while they simultaneously threaten each other.  Call this the paradox of community (it’s at least a significant tension).

The paradox of community—the fact that a community’s traditions make innovation possible while simultaneously trying to prevent innovation (because innovation could bring the end of the tradition)—makes life in community … interesting.

Another fact about communities is that they either grow or die; stasis is illusory. Communities grow as their members change (some join, some exit, some change themselves), innovate, bring about changes to the traditions (adding some, altering others, ending still others). This is why the paradox is so important.

Some within a community can become so committed to particular traditions of the community that they work to slow the pace of the community’s growth in order to prevent their favored traditions from being altered or ended, or new ones from being included.  They may do this by trying to encourage newcomers to learn and accept the existing traditions of the community or by actively working to create an environment in which those seeking change are limited. If they succeed too well—preventing any change in the community’s traditions—they attain stagnation rather than stasis.  This is because an absence of change in a community (as in an individual person or any animal) brings the end of the community.  It means no new members—and with no new members, the community dies as its members die.  Change—innovation—is essential to community.

Of course, new people may attempt to join the community. When they do, they bring their own histories, cultures, beliefs, and ideals. They could (and perhaps should) learn about the community’s ways of doing things. That is consistent with their bringing their own ways of doing things (and their histories, cultures, beliefs, and ideals). It is consistent, that is, with change. But if those within the community seek to limit change, they may try instead to indoctrinate the newcomers into the community’s traditions so that they live as those in the community now live, rather than bringing anything different. Indoctrination thus treats newcomers as having nothing of their own to contribute, as if their histories, cultures, beliefs, and ideals have no place in the community. Newcomers would thus not be allowed to bring their ideas and preferences into the community’s traditions—those traditions would not be allowed to change. Such newcomers are, then, likely to exit the community. (Notice that this does not mean they physically move away or drop their official membership—remember, communities are not the same as neighborhoods or associations.)

To build community, change must be permitted. This means that all in the community must listen to each other, open to hearing new things that might be incorporated into the web of community activity and the traditions that shape it. This does not mean jettisoning everything previously held dear, but it does mean being open to the possibility of doing so (likely not all at once). Long-time members of the community can teach newer members how things were or are done, but that counts no more than what newer members bring to the table. Importantly, those whose ideas are rejected out of hand have no reason to participate in the community. Ignoring this—thinking that all learning here runs in one direction—will simply give rise to factions, splintering what was a community and killing it, while perhaps giving birth to new, smaller communities as those factions continue to grow.

So, both tradition and innovation are essential to community. What this means, in part, is that while change is necessary, the pace of change may be too much for some people within a community, at least those committed to one or more of its traditions. Still, change can’t be stopped; a successful attempt to stop it kills the community. The question for those in a community is thus whether their favored tradition(s) and its (or their) history are more important than the community itself. To side with a tradition is to side with those no longer present; to side with community is to side with those currently constituting the community—including those who wish to see change.

Of course, those siding with a tradition may take that tradition to have independent value and thus to be worth protecting. They may take this to be a principled defense of preventing change in the community. It is not. The community from which a defended tradition stems, like all communities, must be able to change. (Again, stagnation means death.) Indeed, all surviving communities have what can reasonably be called traditions of change–ways that change takes place. So when defenders of one tradition seek to prevent change, they are pitting one part of the community and its traditions against another and claiming that one of the traditions should be defended at the cost of another—their favored tradition at the cost of the community’s tradition of change. That, though, is just a preference. One cannot just assume that one favored tradition is more valuable than another. After all, those seeking change may rightly claim to be defending a tradition of change within the community.

Putting the last point differently, those seeking change are defending the community as the community currently is and is growing with its current members and their preferences. Those seeking to prevent change, by contrast, are defending only part of the community—some specific tradition(s) they happen to prefer—and, by seeking stagnation, killing the community.

Lest I be thought too critical of defenders of particular traditions, I should note that I do not think there is a good principled reason for either protecting particular traditions or for changing or jettisoning them. In either case, on my view, further considerations are necessary. What we need to determine, on my view, is when interference is justifiably permitted–what principles of interference we ought to accept rather than simply what traditions we happen to prefer. (I discuss some such considerations here and in my 2014.)

Moralism, Nationalism, and Identity Politics

In a previous post, I began discussing moralism, which I take to be a commitment to the view that some acts must be forbidden, socially or legally, because they are (a) judged wrong by the general populace, (b) in some way opposed to the continued survival of the community qua a somehow unified group (I had said “general populace,” but this is clearer), or (c) simply immoral even if no one is hurt by them.

I have been seeing, once again, posts on social media about the loss of national identity (and praise for a few places that seem to still have such). My response to such posts is always the same: why would anyone value a national identity? That is the same response I have to those who seem to identify with a political party, an ideology, a racial or cultural group, a group defined by shared sexual preferences, etc. I always wonder why anyone thinks that a group has any independent substantive value rather than just being a set of people who happen to share something in common.

Identifying with a group could just be recognizing that one has something in common with others (those also in the group), but it—or “taking a group identity”—has become something more. It is, we might say, an entryway into valuing that group for its own sake—that is, it’s the starting point to thinking of the group as having some value above and beyond the value of the individuals in the group. Nationalism is no exception—the whole point (it seems to me) is to encourage people to think of the nation as an entity of moral value all its own. Granted, that value is meant to be somehow good for the people within the nation, but how that works is mysterious. (But not to the point here.)

What do those who bemoan a loss of national identity (or who seek to revive such) want? They want to convince others to live as they think all ought to live. Or at least how all who live here ought to live. This looks like the first sort of moralism—they believe those who act differently are somehow acting wrongly. But what is it that they do wrong? So far as I can tell, it is nothing more than the refusal to live as the advocates of nationalism want.

Why are advocates of nationalism so concerned about people acting differently? This is where the second sort of moralism comes back in: what nationalists want is to be assured their group will survive; they thus fear anyone not going along with them as it means their national group does not have the allegiance of everyone and is thus threatened. It is the survival of the group that matters, after all, not the survival of the individuals within the group.

To be clear, so far as I can tell, nationalism is no different from any other form of political identity. Each group wants all of its members to “fall in” and be what the group is self-portrayed as. Those who act differently or in any way challenge the supposed identity of the whole are a problem to be dealt with, perhaps excised from the group, excommunicated, shunned, cancelled, or deported; perhaps (the topic for a future post) jailed or killed. Here I note only that I prefer the liberal ideal: I like that we live in a society with people who have different backgrounds, beliefs, religions, heritages, skill sets, etc. 1000 flowers blooming is far more attractive than 1000 clones.

Schools, Teachers, Parents, and a Bad Assumption

In my last post, I discussed the problems surrounding opening schools and, importantly, how we discuss them. In this post, I want to raise an issue about schooling more generally that is rarely discussed at all. I want to show how our current system encourages a false belief about parents and teachers that has pernicious results.

I begin by noting that my wife is a public school teacher and, given how Georgia is handling the pandemic, I have a clear preference for her not to teach in her school building. I also have a school-age child who was, until a month ago, in a private school. The administration of that school is, I think, approaching the situation far better than most, but we still worry about both health and pedagogical risks. Thinking about both of them returning (or not) to school has me once again wondering about fundamental social problems—especially regarding schooling and parenting.

I think most of us are pretty bad at parenting. (Philip Larkin understood this well, but I should be clear that I think there are a huge variety of ways that we are bad at it—some are overbearing and some are entirely too loose, for opposing examples.) Worries about increased child abuse with school closures are therefore not at all surprising. On the other hand, I also think most K-12 schools are pretty bad at educating. Having served on committees for two charter schools and volunteered and watched at my son’s schools, I’ve been amazed at how unwilling school administrators can be to make use of evidence about best educational practices. (This is sometimes true even when they clearly know the evidence—in such cases, they tend to point out that they are constrained by budgets, politics, etc.) Schools don’t, in my view, offer enough music or art or time to relax, run, and breathe outside. They also tend to start too early in the morning, foolishly insist children sit still and at desks, force students to maintain logs of reading, and even penalize students who read unassigned books at the wrong time. Worries about children being stifled and losing their innate curiosity because of school rules are therefore also not surprising.

Many parents are aware of problems with their children’s schools. Some even work to correct them. Most, though, seem to “mind their own business”—as if the education of their children were not their business. Indeed, many parents seem to think that because schools are provided and mandatory, they are themselves absolved of the responsibility for educating their children. (As schools feed and medically nurse children, parents may feel absolved of even more responsibility.) Even the best of parents tend to assume their children are being well taken care of at school. Unfortunately, too many parents assume their children are the school’s responsibility during the day. Interestingly, the pandemic helped some see that their school was not working for them. (See this interesting NY Times piece.)

I do not think any of this is surprising or unexplainable. We live in a society wherein government has encouraged parental abdication of educational responsibilities. Parents often rightly feel that they cannot opt out of government-run schools. Where they can, they are usually constrained to choose either the local government school or a nearby private school. Only in some locales is there a simple and straightforward process through which you can legally educate your own child. (The option is, I think, available everywhere in the US, but with more or less red tape involved.) All of this encourages a belief that I suspect drives the problems that beset schools: that parenting and teaching are necessarily distinct and must be kept separate.

Our system of K-12 education relies on the idea that parents are not teachers. Indeed, some homeschooling parents have been condemned for thinking they could teach their own children. Parents, on this view, are supposed to feed, clothe, love, and maybe socialize children. Schools, on the other hand, provide teachers to educate children, too often including moral education (and might also provide food and healthcare for the children). And schools—or the administrators thereof (or, worse, politicians)—decide where a child will learn and how. A parent that tries to send her child to a better public school than the one closest may face jail time—because the school system decides, not the parent. (See this and this.) Parents, after all, don’t know about education.

Two problems emerge when people believe parenting and teaching are necessarily distinct. The first, I’ve discussed above: schools operate with a variety of problems, and parents either don’t work to change them or try but face insurmountable difficulties in the attempt. When they don’t try, it is likely at least partly because thinking that parents aren’t teachers makes parents think teachers have an authority they do not. And, of course, they assume teachers run schools. The second problem is a corollary: because parents are led to believe schools and teachers have an authority they do not themselves possess, parents don’t think they need to be active participants in their children’s education. In short, parents take less responsibility for raising their children, leaving more and more to schools. What society gets, too often, is school graduates who learn to do as they are told, conforming to societal requirements. If parents were more active, we’d get more diversity in how children are educated, resulting in many benefits (though admittedly also costs in terms of equity). I think we are seeing some of this already and hope to see more. We’d get more people contributing in more and more varied ways to society, creating more and more varied benefits for all.

In short, the all too common belief that parents and teachers are necessarily distinct lets parents off the hook for too much and grants schools too much leeway. Challenging that belief would encourage parents to challenge their children’s schools, thereby either improving the schools or having the schools lose students to other alternatives.

The pandemic has forced us to re-evaluate many things. Hopefully, one positive outcome will be a healthier view of the relationship between parenting and education–one that emphasizes parental responsibility and acknowledges the limits of career educators (especially those in what might be called “educational factories”). One might even hope that this would help make parents better at parenting.

(Conversations with my wife and with Lauren Hall, JP Messina, and Kevin Currie-Knight inspired, and helped me with, this post.)