Introduction to Social Ethics

The dominant ideas in moral philosophy emerged from the Enlightenment, which was an intellectual movement during the mid-17th to late 18th century that followed the Renaissance and preceded the Industrial Revolution in Western Europe.  The scientists and mathematicians of the Renaissance period were heavily influenced by the careful observations of nature perfected by the artists of the time.  In the case of da Vinci (1452–1519), there seems to be little distinction between art, science, and engineering, all of which could eventually be wrapped up in the single term technology — meaning the study of technique.

It may be that the realism evident in the cultural movement of the Renaissance inspired an empiricism in science, as best exemplified by Bacon (1561–1626).  However, the scientists and mathematicians of the early Enlightenment, such as Descartes (1596–1650), Newton (1642–1727), and Leibniz (1646–1716), shifted their attention away from art and towards philosophy.  They saw science as a structured approach to discovery of the immutable and generalizable laws that govern Nature, as observed in the interactions between energy and physical objects, including the orbit of planets around the sun and the behavior of light.  It is this tradition of natural philosophy that gives the modern “Doctor of Philosophy” its title, despite the fact that graduates from modern Ph.D. programs in the physical and natural sciences typically never study the modern discipline of Philosophy at all.

Following the intellectual trajectory established by Copernicus (1473–1543) and Galileo (1564–1642), Newton used the natural laboratory of the solar system as the empirical basis for his Philosophiae Naturalis Principia Mathematica, in which he described the laws of motion that govern the orbits of the planets around the sun and (by extension) the interactions between all other ideal bodies.  In the near-perfect vacuum of outer space, where objects act upon one another at a distance, Newton could study motion without the distraction of non-idealities (like friction, or drag) that likely led Aristotle (384–322 B.C.) to believe that heavier objects fall faster than lighter ones.

Thus, Newton established the model of modern scientific investigation adopted during the Industrial Revolution — investigation of idealized particles by a detached scientific observer, testing mathematical descriptions of observable phenomena — and established physical science (i.e., physics) as increasingly independent from metaphysics.  But the Enlightenment was also a celebration of the political and intellectual freedom of the individual (“Knowledge is power” – Bacon) as the emphasis shifted from knowledge of God to knowledge of self.  Philosophers like Locke (1632–1704, England), Voltaire (1694–1778, France), Smith (1723–1790, Scotland), and Kant (1724–1804, Prussia) departed from the empirical Newtonian topics of study and established philosophy as a separate discipline, even though they were deeply influenced by Newtonian successes.  In fact, the Enlightenment was an especially productive period in the establishment of new domains of study.  The Stanford Encyclopedia of Philosophy notes:

The commitment to careful observation and description of phenomena as the starting point of science, and then the success at explaining and accounting for observed phenomena through the method of induction, naturally leads to the development of new sciences for new domains in the Enlightenment. Many of the human and social sciences have their origins in the eighteenth century, in the context of the Enlightenment (e.g., history, anthropology, aesthetics, psychology, economics, even sociology), though most are only formally established as autonomous disciplines in universities later.

Thus, all the sciences of the Enlightenment were either unconsciously or consciously imitative of physics in the sense that they adopted reductionist approaches to seek generalizable principles.  In moral philosophy and the social sciences (including economics), the natural locus of study was the individual as an analogue to the Newtonian particle.

One of the major philosophical crises of the Enlightenment was the nature of free will.  From the Stanford Encyclopedia again:

Newton’s success early in the Enlightenment of subsuming the phenomena of nature under universal laws of motion, expressed in simple mathematical formulae, encourages the conception of nature as a very complicated machine, whose parts are material and whose motions and properties are fully accounted for by deterministic causal laws.

Philosophers such as Hume (1711–1776), who sought to “establish the basic laws that govern the elements of the human mind in its operations” (ibid), explicitly adopted Newtonian mechanics as a metaphor for explaining human behavior — a view that proved remarkably persistent.

When real, observable human behavior failed to correspond to the idealized models developed by the philosophers, psychologists, and economists of the ensuing Industrial Revolution, more often than not the failure was ascribed to the humans — not the theories.  These behavioral failures could be medical, psychological, cognitive, or moral, but the deviation between normative ideas of what behavior should be and descriptive accounts of real behavior was nevertheless consistently attributed to some deviance in the individual exhibiting the behavior.

In this view, a moral individual subscribes to a consistent set of ideal principles, regardless of the behavior of others.  According to Kant, to do less would be irrational.  While there may still be room to argue about which moral principles are the correct ones, the Enlightenment approach to moral philosophy holds that there must be laws of moral reason that are generalizable, universally applicable, and discoverable from some foundational principles.

Nevertheless, evidence that people do not behave as rationalistic and individualistic atoms — even in an ideal sense — keeps accumulating.  The videos below discuss or re-create a set of experiments that are now several decades old.

The first is the Asch Conformity Experiment, which shows that individual reason is subject to distortions resulting from interaction with others in a group setting.

The next video illustrates the Bystander Effect, in which the moral actions of an individual seemingly depend on the expectations of a group, rather than a personal sense of right and wrong.

These three videos document the Milgram Experiment, in which subjects conform to the expectations of authority, despite the misgivings of their own conscience.

Lastly, the most infamous of these experiments may be the Stanford Prison Experiment, in which college students randomly assigned to the role of “guard” or “prisoner” adopted extreme personas that aligned with their assigned characters.

Both the Prison Experiment and more recent abuses at Abu Ghraib prison in Iraq call into question the usual explanation — what we might call the “Bad Apple Theory”.  According to this explanation, just one or a few deviant individuals can spoil the behavior of the entire group.

While there certainly may be immoral tendencies that are stronger in some individuals than in others, Phil Zimbardo (the psychologist who ran the Prison Experiment) offers a different explanation.  In his view, it is the system (or the apple “barrel”) that causes the behavior.  That is, maintaining the exceptional moral character necessary under such extremely difficult conditions is tantamount to heroism.

The problem with traditional ethics education is that it is predicated on the imperfect premise that individuals can, through reason, somehow be made immune to failures of moral character.  In Zimbardo’s view, behavior in social settings results from a negotiation between individuals and the institutions in which they are embedded.  As such, it may be that context — the very thing reductionist science seeks to strip away — is the most important determining factor in human behavior after all.

The Question of Intergenerational Equity

Until now, the approach we’ve used to study the interactions between different actors in Tragedy of the Commons or environmental externalities problems has been non-cooperative game theory.  The term “non-cooperative” means that there is no mechanism for enforcement of contracts within the game, so players are not incentivized by penalties or punishment to work collectively — although they may choose to work generously together for moral or religious reasons, or purely out of enlightened self-interest.  For example, in a repeated game, the players might realize that a little cooperation in the early rounds can build trust that pays off in later rounds.

In this video, Richard Dawkins explains a computer programming competition to see which strategy — cooperating or defecting or some combination thereof — would be the most successful in a repeated Prisoner’s Dilemma game.  Sure enough, the “nice” program Tit for Tat was the most successful of all strategies.  In fact, nice strategies generally did better than selfish strategies, largely because they can expect to encounter other “nice” strategies willing to consider mutual cooperation.
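
The logic of that tournament is easy to reproduce in miniature.  The sketch below, written in Python, pits Tit for Tat against an unconditional defector and a “grudger” that cooperates until it is betrayed once; it assumes the conventional payoff values (3 points each for mutual cooperation, 1 each for mutual defection, 5 for a lone defector, 0 for a lone cooperator).  The strategy set, the payoffs, and the number of rounds are illustrative assumptions, not details taken from the competition Dawkins describes.

```python
# Miniature iterated Prisoner's Dilemma tournament (illustrative sketch).
# Assumed payoffs: both cooperate -> 3 each; both defect -> 1 each;
# lone defector -> 5; lone cooperator -> 0.
from itertools import combinations

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(own_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else 'C'

def always_defect(own_history, their_history):
    """Defect no matter what."""
    return 'D'

def grudger(own_history, their_history):
    """Cooperate until the opponent defects once, then defect forever."""
    return 'D' if 'D' in their_history else 'C'

def play(strategy_a, strategy_b, rounds=200):
    """Return the cumulative scores of two strategies over repeated rounds."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {'Tit for Tat': tit_for_tat,
              'Always Defect': always_defect,
              'Grudger': grudger}
totals = dict.fromkeys(strategies, 0)
for (name_a, strat_a), (name_b, strat_b) in combinations(strategies.items(), 2):
    score_a, score_b = play(strat_a, strat_b)
    totals[name_a] += score_a
    totals[name_b] += score_b

for name, total in sorted(totals.items(), key=lambda item: -item[1]):
    print(f"{name}: {total}")
```

In a round robin of this kind, the retaliatory but initially cooperative strategies finish ahead of the pure defector, echoing the pattern described above.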

This kind of enlightened self-interest, in which spontaneous cooperation is rewarded, is most likely to emerge when the parties can:

  1. communicate with each other,
  2. identify strongly with each other, especially in opposition to other groups,
  3. reciprocate with each other, and
  4. feel the consequences of failing to cooperate almost immediately.

In most situations, these conditions could be met in theory but might be difficult to realize in practice.

However, there is one problem in which it is impossible for at least three of these conditions to be met at all: climate change.

Because the effects of climate change are gradual and delayed, the individuals most responsible for the problem are unlikely ever to feel its consequences; those who will be most affected have not yet been born.  Thus, there can be no communication, no reciprocity, only a remote sense of consequences, and identification is contingent upon fuzzy notions of protecting unborn great-grandchildren.  In other words, the usual mechanisms for effecting collective action via enlightened self-interest are, in the special case of climate change, inaccessible.

This raises the question of what, if anything, present generations owe to the future.  If current behaviors are so damaging to the well-being of future generations that their quality of life will be severely constrained, then it can certainly be said that future people have been harmed.  Because it is reasonable to expect that future people should have the same opportunities to pursue happiness as we do at present, moral sensitivity demands consideration of that harm.  This has been called the problem of Intergenerational Equity.

The Future Ain’t What it Used to Be

The traditional economic approach to problems of inter-temporal equity is to compare the value of future events to those of the present by discounting the future — i.e., counting future benefits less when comparing them to benefits available now.  Thus, the promise of $2 ten years from now might be considered worth giving up only $1 today.
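
A quick calculation makes the arithmetic concrete.  The sketch below assumes standard exponential discounting, in which a payment received t years from now is divided by (1 + r) raised to the power t; the ten-year figures come from the example above, while the 100-year comparison is an added illustration.

```python
# Present-value arithmetic behind discounting (illustrative sketch).
# Assumes standard exponential discounting: PV = FV / (1 + r)**t.

def present_value(future_value, rate, years):
    """Value today of a payment received `years` from now, at discount rate `rate`."""
    return future_value / (1.0 + rate) ** years

def implied_rate(future_value, value_today, years):
    """Discount rate at which a future payment is worth `value_today` now."""
    return (future_value / value_today) ** (1.0 / years) - 1.0

# The example above: $2 ten years from now traded for $1 today
# implies an annual discount rate of roughly 7.2%.
rate = implied_rate(future_value=2.0, value_today=1.0, years=10)
print(f"Implied annual discount rate: {rate:.1%}")

# At that same rate, a benefit a century away is worth only pennies today,
# which hints at why very long-term problems strain this approach.
print(f"Value today of $100 in 100 years: ${present_value(100.0, rate, 100):.2f}")
```

At rates of a few percent or more, exponential discounting shrinks century-scale benefits toward zero, a point that matters for the sustainability problems discussed below.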

There are several reasons to take this approach, including:

  1. The necessity of compensating promise holders (who accept deferred compensation) for the risk that they might never get paid at all.  To accept present sacrifices, promise holders want a larger payoff in the future.
  2. The necessity of compensating promise holders for the opportunity costs of deferring compensation when, if they had collected earlier, they might have earned interest on the money by lending it to someone else.
  3. The natural tendency of people to prefer early payments to late ones.

The implication is that deferring compensation involves near-term sacrifice for long-term gain.  The waiting can feel like torture, but it ultimately pays off.

Nonetheless, even this approach of inter-temporal discounting fails to encompass very long-term problems of sustainability, given the reality that sacrifices will be required of people who will never be able to reap the benefits.  Consequently, the question of intergenerational equity can only be understood as a moral problem, and not a problem of enlightened self-interest.  While the present generation can constrain the resources and the alternatives available in the future, the future generation has no recourse, no voice, and no ability to impact the fortunes of the present.  Given the absence of interaction between the generations, there is no way that the problem of intergenerational equity can be resolved by cooperation between generations.

In this video, Chilean economist Manfred Max-Neef argues that self-interest among political leaders works against resolution of long-term problems.  Moreover, he feels powerless to act in concert with others, even within the structure of the World Future Council, a non-governmental organization founded explicitly for the purpose of advancing the interests of future generations (whatever those might be).

Weak vs. Strong Sustainability

There are currently two arguments that attempt to resolve the question of what moral obligation the present generation might have to future generations.  The first of these is called strong sustainability, and it holds that resources and management alternatives should be conserved for future generations in kind.  The second is weak sustainability, which is more optimistic about the capacity of future generations to meet their own needs by discovering substitutes for depleted resources.  For example, a strong sustainability argument would prohibit whaling on the scale at which it was pursued in the mid-19th century, which nearly drove some whale species to extinction.  However, the weak sustainability argument says that it is natural and efficient to use the highest quality, most accessible resources first, and invest a portion of those resources in new discoveries that will provide a high quality of life when the best resources are depleted.  In an economic sense, petroleum from oil drilling provided a substitute for whale oil as a lubricant and in lamps, thereby sparing the remaining whales and enabling greater gains in the quality of life (at least for many humans).

Whereas the strong sustainability argument is easy to implement, in that it requires only an understanding of present resources rather than knowledge of future discoveries that may or may not come to pass, it also demands much greater sacrifices in the present day than the weak argument does.  The weak sustainability approach is more tempting, but still demands an answer to the question, “What, specifically, is owed to the future?”

The answer provided by Nobel laureate economist Robert Solow is simple: knowledge.

In particular, Solow argues that the present generation should be making sacrifices in its level of consumption to fund research that discovers new resources or makes application of remaining resources to human needs more efficient.  In Solow’s view, it is morally satisfactory to be using up all the petroleum so long as we invest in hybrid cars or other technologies that will allow future generations to enjoy the benefits of the fuel, even if they have less fuel to go around.  Thus, the moral tension for Solow lies in the difficulty of weighing the needs of the present-day poor, who might benefit from welfare programs, against those of the future poor, who instead need the resources that might have been spent on welfare to be invested in research and development.

One interesting aspect of Solow’s argument relates back to externalities.  Because knowledge, unlike petroleum or fish, can never be used up, knowledge is not a rivalrous good.  One person’s use of knowledge does not necessarily keep others from using the same knowledge.  Therefore, research that generates new knowledge typically results in positive externalities, meaning that there are spill-over benefits to parties that did not invest in the research at all.  This is exactly the opposite of the problem of negative externalities caused by pollution, but it also creates a new moral dilemma called the free rider problem, in which the best possible strategy for any one individual would be to let others make the sacrifices necessary to perform basic research, but still enjoy the benefits of such research by reaping the positive externalities.  As we have seen before, in a free rider problem, what is optimal at the scale of the individual is tragic at the scale of society as a whole.
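
A toy public-goods calculation shows why free riding is individually tempting but collectively costly.  In the sketch below, the group size, the contribution, and the threefold social return on shared research are invented numbers chosen only to expose the structure of the incentive.

```python
# Toy public-goods (free rider) calculation; all numbers are invented for illustration.
# Ten players may each contribute 10 units to shared research; total contributions
# are tripled (the assumed social return) and the benefit is split equally among
# everyone, contributors and free riders alike.

N_PLAYERS = 10
CONTRIBUTION = 10.0
MULTIPLIER = 3.0  # assumed social value created per unit invested in research

def payoff(contributes: bool, others_contributing: int) -> float:
    """Net payoff to one player, given their choice and how many others contribute."""
    total = CONTRIBUTION * (others_contributing + (1 if contributes else 0))
    shared_benefit = MULTIPLIER * total / N_PLAYERS   # the spillover reaches everyone
    private_cost = CONTRIBUTION if contributes else 0.0
    return shared_benefit - private_cost

print("everyone contributes:     ", payoff(True, N_PLAYERS - 1))   # 20.0 each
print("everyone free rides:      ", payoff(False, 0))              # 0.0 each
print("free ride while 9 invest: ", payoff(False, N_PLAYERS - 1))  # 27.0
print("contribute while 9 invest:", payoff(True, N_PLAYERS - 1))   # 20.0
# Whatever the others do, a single player always nets more by free riding,
# yet a group of free riders ends up worse off than a group of contributors.
```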

Introduction to Game Theory

In the 1987 movie Wall Street, the character Gordon Gekko (played by Michael Douglas) proclaims that “Greed, for lack of a better word, is good.”  He claims that what is understood to be an immoral motive at the scale of the individual is really a virtue at the scale of the organization.

This aphorism, which is understood to be derived from Adam Smith’s Wealth of Nations, has been repeated so often that it has now become conventional wisdom, if not cliché.

Why is that good?  When individuals work harder for themselves, they produce more and create more wealth (in terms of available goods, services, or manufactured capital), driving market prices down and enabling greater consumption (or investment).  In this way, “greedy” individuals operating in market systems can benefit all other market participants through lower prices, by acting in their own self-interest (through harder work, or innovation leading to efficiency gains).  Thus, Smith’s approach pushes back on the simpler, Judeo-Christian view that greed is a vice.

However, there is a certain class of problems involving group dynamics in which the analysis described by Adam Smith is wrong.

As depicted in the biopic A Beautiful Mind, the mathematician John Nash discovers that when confronted with problems of competition, such as the management of common resources, actions that might seem rational at the scale of the individual can be irrational at the scale of the group.  The branch of economics that studies problems of this type is called game theory.  The essential characteristic of a game-theoretic problem is the realization that any individual’s best decision depends upon what they expect other individuals to do.  Thus, game theory is capable of modeling the interaction between individuals.

The classic example of a game theoretic problem is called the Prisoner’s Dilemma:

The Prisoner’s Dilemma belongs to a class of game theoretic problems called non-cooperative.  In this class of problems, players (decision-makers) each decide independently, without the benefit of a contract or other enforcement mechanism that can hold the other party to an agreement.  The Nash Equilibrium is found where no decision-maker can improve their position unilaterally (i.e., without a change in the decisions of others).

At the Nash Equilibrium, the only way to improve the system is if the decision-makers work collectively — i.e., they have to agree to cooperate.  The difficulty is that, without a punishment mechanism for enforcing the agreement, both decision-makers have an incentive to cheat, despite the fact that in a non-cooperative game-theoretic problem like the Prisoner’s Dilemma, neither player can improve their own position without damaging the position of the other.
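
To make the equilibrium concrete, the short sketch below checks every pair of choices in a Prisoner’s Dilemma payoff table against the definition above: a pair of decisions is a Nash Equilibrium when neither player can do better by switching unilaterally.  The payoff numbers are assumed for illustration; only their relative ordering matters.

```python
# Brute-force Nash Equilibrium check for a two-player Prisoner's Dilemma.
# Payoffs are utilities (higher is better); the numbers are illustrative,
# but they preserve the ordering that defines a Prisoner's Dilemma.
from itertools import product

ACTIONS = ('cooperate', 'defect')
# PAYOFF[(row_action, col_action)] = (row_player_payoff, col_player_payoff)
PAYOFF = {('cooperate', 'cooperate'): (3, 3),
          ('cooperate', 'defect'):    (0, 5),
          ('defect',    'cooperate'): (5, 0),
          ('defect',    'defect'):    (1, 1)}

def is_nash(row_action, col_action):
    """True if neither player can improve their payoff by switching unilaterally."""
    row_pay, col_pay = PAYOFF[(row_action, col_action)]
    best_row = max(PAYOFF[(a, col_action)][0] for a in ACTIONS)
    best_col = max(PAYOFF[(row_action, a)][1] for a in ACTIONS)
    return row_pay == best_row and col_pay == best_col

for profile in product(ACTIONS, repeat=2):
    label = "Nash Equilibrium" if is_nash(*profile) else "not an equilibrium"
    print(profile, "->", label)
# Only (defect, defect) survives, even though (cooperate, cooperate)
# leaves both players better off.
```

The check singles out mutual defection as the only equilibrium, which is exactly the tension described above: each player does best by cheating, yet both would prefer the cooperative outcome.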

Cases of cheating seemingly abound, such as in sports and education.  Not all cases of cheating involve non-cooperative game theoretic problems — although they may involve collusion in covering it up.  However, several studies have documented that people are more likely to cheat when they believe that others are cheating.  In other words, cheating is contagious.

Two recent cases of cheating are noteworthy.  The first is Lance Armstrong, who was recently stripped of his cycling championships as a result of the testimony of several other cyclists who claim that he used banned performance-enhancing drugs.  In this case, it is clear that all cyclists would likely be better off if none cheated, because none of them would find it necessary to incur the medical risks of blood doping.  However, if no cyclist were cheating, then the incentives would be strong among those near the top to garner a competitive advantage by being the only cyclist cheating.  Given that all top cyclists are reportedly cheating, they all incur both medical and career risks, but none gains a competitive advantage.

The case of the Harvard University students in an Introduction to Congress class who were accused of discussing and sharing answers on a take-home exam is more complicated.  The exam itself was open book, open notes, and open Internet.  However, students were explicitly prohibited from discussing the exam.  Nevertheless, several Teaching Fellows (assistants) in the class fielded questions from the students and provided clarification of the exam questions to different degrees.  Moreover, the accused students say that the explicit rules of the exam were at odds with the culture of collaboration that characterized the course.

The accounts of the Harvard case to date seemingly ignore the fact that grading practices in modern universities often position students in a non-cooperative game theoretic problem.  For example, when letter grades are assigned on the basis of out-performing the average score, then over-performance by one student will necessarily hurt the grades awarded to other students (by raising the average, or busting the curve).  All students might be better off if they all agreed not to study — or at least not to perform their best on the exam.  This would save the students’ effort and result in the same distribution of grades.  However, each student has an individual incentive to study as hard as they can, even knowing that their accomplishments will diminish the grades of other students.

When cheaters like criminals or oligopolistic firms do not elicit sympathy, it hardly seems that their failure to work collectively is immoral.  In fact, the conventional wisdom is that competition benefits society.  For example, in the United States, special laws have been enacted that prohibit “racketeering” and organized crime, increasing penalties in cases where criminals are working collectively.

However, in other instances, the failure of individuals belonging to a single bloc to work collectively can result in social costs.  In 1968, Garrett Hardin identified a class of such problems particular to management of “common pool resources”.

The classic economic solution to the problem of the commons is privatization, in which exclusive property rights are allocated to individuals, who consequently have an incentive to manage those resources wisely, thereby aligning individual and social incentives.  However, privatization is not the only mechanism by which common goods have been successfully managed.  Elinor Ostrom points out that cooperation between individuals can exist despite the incentive to cheat and in the absence of third-party enforcement (meaning enforcement by someone outside the group).  In these instances, groups typically institute their own mechanisms of enforcement.

Because some common pool resources (such as the atmosphere) are not amenable to privatization, Ostrom’s discovery of alternative mechanisms may be especially important to sustainability.  However, recognition of game-theoretic problems significantly complicates moral analysis.  Because the outcomes of an interesting game-theoretic problem depend on interaction between two or more players, where should the moral culpability for the tragedy reside?

In fact, doing the “right thing” in a non-cooperative game theoretic problem might actually encourage other players to do the wrong thing, by improving their payoffs.  The converse is also true.  Doing the wrong thing (that is, defecting or failing to cooperate), or at least the credible threat of the wrong thing, might actually turn out to be the only way to ensure that other players do the right thing, as this video from a popular British game show illustrates.