Clinical ethics and materialism

In a variant of the hoary old ‘ungrounded morality’ question, Barry Arrington has a post up at Uncommon Descent which ponders how a ‘materialist’ could in all conscience take a position as clinical ethicist, if he does not believe that there is an ultimate ‘right’ or ‘wrong’ answer. I think this betrays a fundamental misunderstanding of clinical ethics. In contrast to daily usage, ethics here is not a synonym for morality.

I can understand how a theist who believes in the objective reality of ethical norms could apply for such a position in good faith. By definition he believes certain actions are really wrong and other actions are really right, and therefore he often has something meaningful to say.

My question is how could a materialist apply for such a position in good faith? After all, for the materialist there is really no satisfactory answer to Arthur Leff’s “grand sez who” question that we have discussed on these pages before. See here for Philip Johnson’s informative take on the issue.

After all, when pushed to the wall to ground his ethical opinions in anything other than his personal opinion, the materialist ethicist has nothing to say. Why should I pay someone $68,584 to say there is no real ultimate ethical difference between one moral response and another because they must both lead ultimately to the same place – nothingness.

I am not being facetious here. I really do want to know why someone would pay someone to give them the “right answer” when that person asserts that the word “right” is ultimately meaningless.

(The last question is an odd one. You would pay someone to give you the “right answer” so long as they believe that there is such a thing?)

Of course you don’t have to go far into medical ethics before you get to genuine ethical thickets. The interests of a mother versus those of the foetus she carries; the unfortunate fact that there aren’t the resources to give every treatment to everyone; the thorny issues of voluntary euthanasia or ‘do not resuscitate’ decisions; issues raised by fertility treatments; cases such as the recent removal from hospital of Ashya King; the role of a patient’s own beliefs. There aren’t many right answers when you get beyond the obvious things that you don’t need to pay someone to set guidelines for.

It is a bizarre argument to regard moral relativism as a bar to this job. A moral absolutist may believe that blood transfusion is wrong, that faith in the lord is the way to get better, that embryos should never be formed outside a uterus, or some other such faith-based notion, and would then have to persuade others, of different faiths or none, that this is indeed what objective morality dictates and that they must accept it whatever their own views on morality. So I don’t agree that the ‘grounding’ of an atheist’s personal moral principles has any bearing on their candidacy.

441 thoughts on “Clinical ethics and materialism”

  1. walto,

And even in the case of food, I think there are objectively bad hamburgers.

    And presumably, a maggot-infested burger would qualify as objectively bad. Yet a vulture might find it to be exquisitely delicious. Is that maggoty burger objectively good, objectively bad, or neither?

    I pick ‘neither’.

  2. keiths: In essence, you are arguing that moral norms are objectively good to the extent that they promote ‘successful, enduring societies’.

    Oddly enough, that was B.F. Skinner’s proposed norm.

Only he phrased it in “Darwinian” terms. Societies that do not inculcate norms conducive to their survival do not survive. That would be tautological if it were not possible to identify such norms. He thought it was possible.

  3. walto,

    When you say there are values, just not objective values, it is akin to saying that X is true for me even though not-X is true for you, which I take to be a confusing way of saying “I believe X and you don’t believe X.” Personalizing properties of this kind just makes a mess.

    Better a messy truth than a neat and tidy falsehood.

    We can only act on what we think is right or wrong. Lacking the “cross-checks” I discussed earlier in the thread, we have no way of deciding whether what we think is right or wrong is objectively right or wrong. Why not acknowledge that and incorporate it into our thinking?

  4. In essence, you are arguing that moral norms are objectively good

    Keith:
I’m not arguing about “goodness”; I am arguing about the “objectivity” of norms per se.

    to the extent that they promote ‘successful, enduring societies’. But in so doing, you are unwittingly assuming that successful, enduring societies are an objective moral good.

I think applying “objectively morally good” to “successful, enduring societies” would be a category error when using the argument structure I am trying to present.

It would be a category error for the same reason that using “successful technology” to judge the success of a scientific theory does not make “lets us build successful technology” itself a scientific theory.

I am applying “objectively” to the process by which moral norms are formed using metanorms in naturalistic ethics, because it is analogous to the process by which theories are formed using norms in science (BTW, notice that “norms” appears at different levels in this analogy).

If you want to propose that pragmatism is not a good way to judge the objectivity of the metanorms used to create the norms in moral frameworks, then that is possible; I am assuming pragmatic justification is a reasonable way. I am assuming that because pragmatic justification is the same approach we use to judge the scientific norms used to create theories.

    On the other hand, if you are OK with pragmatic justification but think there are other ways of looking at how well metanorms function to produce objective moral frameworks, then I am all ears.

ETA: I agree you do need norms to compare the functionality measures used in that pragmatic justification, but they are not the same type of norms that appear in moral frameworks. And I also realize I have not defined “successful” in “successful, enduring societies”, and I had better make sure I can do that non-circularly if I want it to add something to “enduring”.

    I really should stop because I don’t really think we are going to make progress towards a common understanding based on our exchanges so far on this topic, but it’s a cloudy day, so what the hell.

  5. walto:
Just wrote a paper on this, Bruce. If it gets accepted, I’ll provide a link.

    Great.

If the link would be to a closed-access journal, can I ask that you also upload a copy or a draft (if required by the journal’s copyright) to your academia.edu site, if possible?

    I promise not to bring up quantum mechanics in any comments I may have.

    Or at least this version of me emerging from the universe’s total quantum state will not do so.

  6. Bruce,

    Perhaps we can agree on this: Once you’ve selected a goal for your moral system, along with a suitable metric for evaluating how well a candidate system achieves that goal, then you can objectively compare moral systems. System A may objectively achieve goal G (as measured by metric M) better than system B does. In other words, system A is objectively better at achieving goal G than system B is, though not objectively better in an absolute sense.

    The choice of goal G, however, is inherently subjective. There is no objectively “right” goal. One person may see human flourishing as the goal. Another may target fairness. A third may settle on “conformance to Sharia law”.

    Once a goal is selected and an adequate metric is defined, comparisons can proceed in an objective (or ‘pragmatic’, if you prefer) fashion. However, the goal selection itself cannot be objective, as far as I can see.

  7. walto,

    The ability to confirm something–in a manner some like–does not make the property come into being.

    True, and I am not saying that objective morality is logically impossible. I am saying that we aren’t justified in claiming that it exists, and even less justified in claiming that it is reflected in our moral intuitions, for reasons that I have already elucidated.

    Values are not facts.

    Objective values, if they exist at all, do correspond to facts. If X is objectively better than Y, then it is a fact that X is objectively better than Y.

    Even subjective values correspond to facts. If I think X is better than Y, then it is a fact that I find X to be better than Y. It’s just that my opinion doesn’t necessarily reflect an objective superiority of X over Y.

    If you take only facts to exist, then there are no values, and you take only what science can confirm to exist, and so you get your conclusion that there are no values. It’s a positivist view I disagree with.

    I do think that values exist, because I accept the standard meaning of the term, which allows for subjectivity. For example, Google returns this definition of ‘value’ as a noun:

    2 a person’s principles or standards of behavior; one’s judgment of what is important in life.

    “they internalize their parents’ rules and values”

    synonyms: principles, ethics, moral code, morals, standards, code of behavior

    “society’s values are passed on to us as children”

    In a sentence like “they internalize their parents’ rules and values”, the word ‘value’ is not being restricted to refer to objective values, as it would be according to your idiosyncratic usage. “He instilled the value of strict obedience in his son” is not a nonsensical statement, even if you happen to disagree that strict obedience is objectively valuable.

    It fits the definition:

    2 a person’s principles or standards of behavior; one’s judgment of what is important in life.

  8. keiths: We can only act on what we think is right or wrong.

Exactly. And if we’re right in what we think, the action IS right; if not, it isn’t. Just like with facts. Sometimes we’re right, based on what seems to be the case to us, sometimes not. You want a decision procedure, and there isn’t one. There are, IMO, just means for determining which judgments are more or less likely to be legitimate (still seems right after a week of thinking about it; most others agree; doesn’t seem to be culturally dependent; etc.). How do I know those tests are more likely to get me to legitimate judgments? It’s just part of understanding what “good” and “right” mean and how our emotions work. It’s the same with our perceptions–we learn to look from different angles, in better light, ask others, etc.

    The difference is that, with values, there are no decisive empirical tests. That’s part of what makes them values.

    I note too that just as we may be uncertain whether it is best for those on Planet Claire to foster or exterminate humans, we may also be uncertain whether there is a planet with humans on it that we will never find. Inability to confirm does not equal meaningless or false. Dependency on verification for meaning and/or truth is a relic of positivism that I believe has rightly been largely dropped.

    It is fun to think about what vultures and those with no musical training may enjoy, but sometimes it is important to make moral judgments too (less important with aesthetic judgments). E.g., Is it the right thing to do to engage in bombings with drones or suggest a post is anti-semitic when you know it isn’t? Note that I am not asking you what you prefer–I may prefer A to B even though I believe B would be wrong. When one is asked these questions what is wanted is whether you believe it’s right (whether you may be mistaken about this or not). It’s ok if you don’t know for sure. It’s even ok if you don’t think there are objective values–that’s also a different (though related) question–because you may be right about what is wrong and wrong about whether there are objective values.

    The point is, when you’re asked what you think is right or good that’s simply not the same thing as being asked what you like or what your culture or country likes, because, again, we may not be on board with what we or our country likes. People who argue about these matters aren’t arguing about what other people prefer–that’s generally obvious. They’re arguing about whether these items should be preferred.

I mean, it’s possible you’re right–I certainly don’t deny that, but there are consequences. On your view, nearly everybody in the world is wrong to engage in arguments about what’s right or wrong, better or worse, good or bad, although we all do so nearly every day of our lives and many of these arguments seem critically important–much more important than, e.g., whether there are objective values.

    Think about it: Everybody would have to be wrong every day of their lives, because you are right that there are no objective values.

  9. keiths:

Perhaps we can agree on this: Once you’ve selected a goal for your moral system, […]

    Keith:
There is a lot I agree with there. I would make a point of adding that science is in the same boat. There are no “objectively right” ways to judge scientific theories, just better ways according to some functional goal.

The choice of goal G, however, is inherently subjective. There is no objectively “right” goal.

    I don’t agree with “inherently subjective”, if that means personal opinion. I think there are rational and hence non-subjective arguments for selecting certain types of measures.

There are some rationally-justifiable things we can look at, assuming we can agree on a basic definition of what morality is for. If science is for predicting and controlling interactions with nature (*), then I think a good start is saying morality is for building the rules for human societies (the rules for interactions between people and for how a person treats him or herself).

Now, what measurements could be rationally argued for?

Longevity and competitiveness with other societies might be two. What would be the point of building rules for something that you don’t think can fulfill its function well enough to last?

    ETA: Whether people free to move choose to leave or enter the society could be another.

Consistency with the anthropological history of humanity might be another. Kitcher argues that the scientific history of primate society indicates that societies which increased altruism were the evolutionary path that humanity was part of, that this fact led to the start of the “moral project” (i.e., the creation of moral frameworks), and that therefore “increasing altruism” should be a measure of the functional success of a framework.

    I’d also argue that effectiveness in delivering that moral framework’s conception of “human flourishing” is another measure of the success of the process of deriving it.
    I do think that “human flourishing” is a general name for something different moral frameworks may define differently (see my post to KN), so I see a need to include this.

    ————————————-
(*) I’m deliberately sticking to an instrumentalist approach to science, as the minimal approach, to avoid issues with scientific realism, which I noted earlier I don’t see as relevant.

  10. Kantian Naturalist:
    (1) “ok, the promotion of human flourishing seems like a good criterion for resolving conflicts between ethical frameworks,

    I’d make this objection: “Human flourishing” is a general term that different moral frameworks may choose to define differently. Hence by itself it cannot be used as a way to judge the pragmatic success of a framework.

    For example, the definition of “human” will be affected by how a framework wants to treat the moral status of fetuses and of “people” in a permanent vegetative state.

Also, different frameworks may emphasize different aspects of flourishing: some emphasizing a person’s role in society, others emphasizing family over society, others emphasizing individual fulfillment regardless of contribution to society (assuming no harm to society, of course).

I would agree with effectiveness in delivering that society’s conception of “human flourishing” as a metric. It might be somewhat circular, so I would want to add others, as listed in my post to Keith.

I just wanted to clarify that I don’t think it would be particularly weird if most people or everyone were wrong in this or that belief they have. After all, nearly everything that nearly everyone has believed at one time or another has turned out false. But the denial of values is a bit different. It’s kind of a form of thought. It’s not itself an empirical claim; it’s rather a claim that a way of being in the world is erroneous. It is, I think, precisely akin to the claim of the epistemological skeptic that there is no external world or no past. It requires the utter dismissal of common sense and ordinary language.

  12. walto,

    I just wanted to clarify that I don’t think it would be particularly weird if most people or everyone were wrong in this or that belief they have. After all, nearly everything that nearly everyone has believed at one time or another has turned out false.

    Good, because that is an important point, and it explains why the “it can’t be true, because then we’d all be wrong” argument doesn’t work. The truth is the truth, regardless of what we currently believe.

    But the denial of values is a bit different.

    Yet again, I’m not denying the existence of values. I’m saying that we aren’t justified in claiming that objective values exist.

    You’ve already stated that there is no “decision procedure” for establishing that something is an objective value. That leaves the other procedure you described: consult your conscience; consult it again some time later; consult others; note whether the answers seem to be culture-dependent or universal.

    That’s not a reliable way of getting at the truth. Look what happens when you apply it to a universal optical illusion:

    1. You see the illusion.
    2. An hour later, you still see the illusion.
    3. A week later, you still see the illusion.
    4. You ask others, and they see the illusion.
    5. The answers don’t vary by culture.

    6. Therefore, you conclude, it isn’t really an illusion at all. You’re perceiving something real.

    The conclusion is obviously mistaken. Why apply the same faulty reasoning to judgments of objective morality?

  13. keiths:

    Perhaps we can agree on this: Once you’ve selected a goal for your moral system, along with a suitable metric for evaluating how well a candidate system achieves that goal, then you can objectively compare moral systems. System A may objectively achieve goal G (as measured by metric M) better than system B does. In other words, system A is objectively better at achieving goal G than system B is, though not objectively better in an absolute sense.

    The choice of goal G, however, is inherently subjective. There is no objectively “right” goal. One person may see human flourishing as the goal. Another may target fairness. A third may settle on “conformance to Sharia law”.

    Bruce:

    I don’t agree with “inherently subjective”, if that means personal opinion. I think there are rational and hence non-subjective arguments for selecting certain types of measures.

    Be careful. “Subjective” doesn’t imply “irrational”.

There are some rationally-justifiable things we can look at, assuming we can agree on a basic definition of what morality is for.

    Sure. As I said:

    System A may objectively achieve goal G (as measured by metric M) better than system B does. In other words, system A is objectively better at achieving goal G than system B is, though not objectively better in an absolute sense.

The choice of the goal G, or as you put it, the “basic definition of what morality is for”, is ultimately subjective. One person might think fairness is all-important, while another thinks that overall human flourishing is what really matters. Neither of them can be shown to be objectively right or wrong.

Now, what measurements could be rationally argued for?

    Longevity and competitiveness with other societies might be two. What would be the point of building rules for something that you don’t think can fulfill its function well enough to last?

    Maybe you don’t care, because you think Jesus is returning any day now. Maybe you don’t think it matters if society falls apart, because your reward is in heaven. Maybe you think that the norms are absolute, and that it’s the job of societies to conform to the norms, rather than for the norms to support the longevity of societies. There are tons of possibilities, and many of them are no less rational, given their goals, than the ones you are proposing.

    A set of norms can only be rational or irrational with respect to the goals it is intended to promote, and the choice of goals itself is ultimately subjective.

    ETA: Whether people free to move choose to leave or enter the society could be another.

    That would be a valid criterion only if your goal was to encourage citizens to stay (or get them to leave)! If your goal was to build an ideal Islamic society, then absolute adherence to the Koran would be far more important than the citizens’ satisfaction and their corresponding desire to stay or leave.

Consistency with the anthropological history of humanity might be another. Kitcher argues that the scientific history of primate society indicates that societies which increased altruism were the evolutionary path that humanity was part of, that this fact led to the start of the “moral project” (i.e., the creation of moral frameworks), and that therefore “increasing altruism” should be a measure of the functional success of a framework.

    It is equally possible to argue, as some Randians do, that altruism is immoral. It’s a disagreeable view, and many people who think they espouse it would discover otherwise if it were actually implemented, but it is not in itself an incoherent view.

    I’d also argue that effectiveness in delivering that moral framework’s conception of “human flourishing” is another measure of the success of the process of deriving it.
    I do think that “human flourishing” is a general name for something different moral frameworks may define differently (see my post to KN), so I see a need to include this.

    Sure. System A may achieve “human flourishing” better than system B, according to some metric. In that case system A is objectively better at satisfying that goal than system B is. It is not, however, objectively better in an absolute sense.

  14. keiths,

Again, if you insist there must be a scientific decision procedure, you will not get to values. And those personal things you concede ARE around aren’t what most people would call values–they’re preferences. It’s very easy to tell what those are. We don’t fight about the food ones–so the theory makes some sense there. We DO fight about the moral ones, which suggests we’re actually talking about something else.

  15. keiths:

Be careful. […]

The choice of the goal G, or as you put it, the “basic definition of what morality is for”, is ultimately subjective. […]

    Keith:
I want to be clear that the goal is to build rules for making human societies work without appealing to theology. Full stop.

    Not to build “fair” societies, not to build “Islamic” societies.

Now I did say originally that such rules must involve “human flourishing”. I am backing away from that somewhat, based on thinking it through more, and saying that the rules for building frameworks will themselves help define “human” and “flourishing”.

    If pragmatic analysis is accepted, it comes down to how one judges the effectiveness of the process to create those rules. I’ve suggested some measures of that effectiveness. I have tried to provide some rational justification for the rules for measuring the effectiveness of moral frameworks. (Just to be clear again: here “rules” means the way we assess effectiveness of a moral framework. It does NOT mean the “rules” within that framework. The latter are the moral norms. And, there are also rules about how to build frameworks!).

It is true that someone could suggest other ways to evaluate effectiveness, as you have done in some of your comments, and then one would need to engage in a rational argument to compare them. So does that make the whole thing non-objective?

    My arguments have always been that the objectivity is comparable to that in science, not that it is absolute. So consider how we judge the effectiveness of scientific norms.

    I suggest that we appeal to effectiveness in prediction in scientific experiments and effectiveness in building technology.

    But just like you can find arguments against my rules for judging effectiveness of processes to build moral systems, people have found arguments against those rules for measuring effectiveness of science as a process. People like WJM can argue that what matters is effectiveness to them by their personal metrics; what works for others matters only insofar as it contributes to that. Or scientific cultural relativists argue that what matters is how science meets non-scientific cultural pictures of the world, and these measures are more important.

Or, for a different type of argument, consider psychology, which I am going to assume is a science. Look at prediction and technology as measures of effectiveness. There are arguments about the statistical analysis which affect how to evaluate predictions in psychological experiments. Or if we look at the technology measure of effectiveness, then one can argue that psychology has produced almost no useful technology (e.g., treatments for disease).

My point is not to say which group is right or wrong in the above controversies, only to draw an analogy between the type of arguments you make about the objectivity of the measures of effectiveness for processes to build moral frameworks and the status of the measures of effectiveness for scientific frameworks.

  16. BruceS:

I wanted to mention that, for me, the long, detailed posts like the previous one are more of an intellectual exercise. I don’t think this is how modern societies set moral standards.

Rather, the day-to-day setting of moral standards is political, which will involve both rational argument and emotional appeal.

    But just as philosophy of science can be a helpful intellectual exercise without necessarily being of day-to-day use in doing science, I think there is merit in trying to understand what a moral framework could be, even though that knowledge may have only limited and indirect impacts on the current moral challenges in a society.

  17. BruceS: …emotional appeal…

How deeply and how often the emotions become the major factor in politics never ceases to amaze me; the most recent example being the imminent vote on Scottish independence.

  18. For anybody interested, I just uploaded to Scribd some excerpts from an E.M. Adams paper (from the mid-1960s) that defends value realism from a position of linguistic idealism derived from Witt’s Tractatus.

    It can be found here: http://www.scribd.com/doc/239913772/Excerpts-From-Adams-A-Defense-of-Value-Realism

    I do think, however, that these ontological positions are basic and that there is a limit to the extent that they can be reasonably supported or denied. We can engage over whether the position is coherent, internally consistent, jibes with common sense and contemporary science, etc. (all areas in which religious views tend to fail miserably, btw), but once we’ve done that, it’s as much a matter of picking what feels nice as arguments pro or con, I think.

Alan Fox: How deeply and how often the emotions become the major factor in politics never ceases to amaze me; the most recent example being the imminent vote on Scottish independence.

I just finished a book about participants in the 1995 Quebec referendum on Quebec’s place in Canada (or not), which was a very close call indeed, although the question was much more ambiguous than the Scottish one.

Most of the participants commenting for the book made the point that the federalists lost all of their lead midway through the campaign when Lucien Bouchard took over the Yes leadership role.

He had lost a leg to flesh-eating bacteria, and people had emotional sympathy for his bravery and sacrifice (in getting involved in politics again).

He was a very good speaker, but the fact that he had to walk with a cane to the podium really helped sell the speech that followed.

  20. walto: We can engage over whether the position is coherent, internally consistent, jibes with common sense and contemporary science, etc. (all areas in which religious views tend to fail miserably, btw), but once we’ve done that, it’s as much a matter of picking what feels nice as arguments pro or con, I think.

I think you’ve described a general limitation shared by all philosophy: the ability of philosophy to analyze the internal coherence of statements without saying much about their (lowercase) truth. This was rather amusingly addressed by Einstein.

  21. Bruce,

I want to be clear that the goal is to build rules for making human societies work without appealing to theology. Full stop.

    But in that case you have already selected your goal. Remember, the issue at hand is whether the goal can be selected objectively.

    I said it couldn’t:

    The choice of goal G, however, is inherently subjective. There is no objectively “right” goal. One person may see human flourishing as the goal. Another may target fairness. A third may settle on “conformance to Sharia law”.

    You disagreed:

    I don’t agree with “inherently subjective”, if that means personal opinion. I think there are rational and hence non-subjective arguments for selecting certain types of measures.

    Once a goal has been selected, I think we agree that comparisons can proceed objectively. Here’s how I put it earlier:

    Sure. System A may achieve “human flourishing” better than system B, according to some metric. In that case system A is objectively better at satisfying that goal than system B is. It is not, however, objectively better in an absolute sense.

    You asked:

It is true that someone could suggest other ways to evaluate effectiveness, as you have done in some of your comments, and then one would need to engage in a rational argument to compare them. So does that make the whole thing non-objective?

    No. The selection of the goal is subjective, but having selected a goal (and a suitably unambiguous metric), the rest of the comparison can be done objectively.

    The end result is that we can say that system A is (or isn’t) objectively better than system B relative to goal G, as measured by metric M. We can’t say that system A is (or isn’t) objectively better than system B in an absolute sense.

The procedure of mine you quoted above [“the procedure you did describe isn’t reliable”] wasn’t intended to be all-inclusive. There are other things too–look at the situation from different angles, substitute other individuals/cases in like circumstances, ask wise people, *ETC.*

    But I concede that no such procedure can ever be completely dispositive–it’s just testing for coherence with other emotional responses–those of our own and those of others.

The thing is, that’s ultimately true of scientific claims too. They are also “unreliable” in the sense that any one of them–no matter how apparently well grounded–is subject to revision or denial as a result of new experiments. Our lives are stuck in uncertainty in both realms. So, the central issue isn’t that. It’s whether emotional responses are intentional–the way we believe perceptual experiences are. That is, does a reaction of approbation or disapprobation have an object in the way nearly all of us believe that perceptual experiences have objects, or are such reactions, in a sense, “free floating” (only seeming to refer to something (values) but not actually doing so)? The (objective) value denier is akin to the phenomenalist who says “In the end, all there really are are sense-data: it’s a mistake to infer anything from the existence of fleeting images.” I think that picture is wrong in both cases.

  23. keiths,

All this is question-begging. It assumes that, in order to be real (or “objective”), values must be determinable by scientific methods. If they could be, they wouldn’t BE values. In this passage, you make clear that what you mean by “objectively” is simply “determinable by some empirical metric”:

    The selection of the goal is subjective, but having selected a goal (and a suitably unambiguous metric), the rest of the comparison can be done objectively.

    The end result is that we can say that system A is (or isn’t) objectively better than system B relative to goal G, as measured by metric M. We can’t say that system A is (or isn’t) objectively better than system B in an absolute sense.

IMO, such a restriction of objectivity (what may be true or not regardless of what anybody thinks) is unjustified verificationism. Goldbach’s Conjecture is true or false, whether or not anybody can prove it.

  24. To play the idiot for a moment:

    *looks round defiantly*

Ethics and morality only matter because we are social animals. Rules for living are superfluous if you live alone on a desert island. However you come at it to derive your system of ethics, it will always involve how you treat other people.

  25. walto,

IMO, such a restriction of objectivity (what may be true or not regardless of what anybody thinks) is unjustified verificationism. Goldbach’s Conjecture is true or false, whether or not anybody can prove it.

    Of course, and I have never stated otherwise. Likewise, objective values either exist or they don’t, regardless of whether anyone can prove their existence or non-existence. Ontology and epistemology are distinct.

    I am arguing that we should have a basis for the assertions we make. I have good reasons for asserting that my monitor exists, and I do assert it, though I might be mistaken. I don’t have good reasons for asserting that objective morality exists, and no one has provided any, so I don’t assert it.

    I don’t have good reasons for believing that Martha Stewart is the Sultan of Brunei in drag, so I don’t assert it. I don’t have good reasons for believing that woodpeckers peck their way through crankcases to drink the motor oil, so I don’t assert it. I don’t have good reasons for believing that objective morality exists, or that X is objectively moral or immoral, so I don’t assert it.

    Given that there isn’t a “scientific decision procedure” for determining that something is an objective value, and given that the procedure you did describe isn’t reliable, then on what basis does it make sense to claim that objective morality exists, or that some particular X is or isn’t objectively moral?

    And if there is no reason to assert it, then why assert it?

  26. keiths:
But in that case you have already selected your goal. Remember, the issue at hand is whether the goal can be selected objectively.

    Keith:
    No, that is not the issue I have tried to present.

All I am saying is that we have reasons to call a moral system objective that are similar to those we have for calling a scientific system objective. I’m not arguing about objectivity per se. I have detailed that analogy in previous posts.

The arguments in your post regarding moral systems can be reworked and applied to science. I did a lot of that already in my previous post.

    Walt makes similar points in his latest posts.

Now I do agree with KN that in the end, even after discussion, we may agree to disagree about how to measure effectiveness, just as you and WJM never came to agreement about science. Both parties to that argument may think they “won”, but that is not the point I see KN making.

    Now when it comes to such arguments, I do think that it might be easier to resolve a discussion if it is about readily understandable metrics, like “history has shown people want to move to such societies” rather than about abstractions like “moral norms should be universalizable” (a metanorm).

The same reasoning applies to science. It is simpler to discuss “working medical treatments” (a metric for measuring the effectiveness of scientific norms) than to discuss the technicalities involved in “tested by randomized, controlled experiments with proper statistical analysis” (a norm for scientific theories themselves).

  27. Alan Fox:
    To play the idiot for a moment:

Ethics and morality only matter because we are social animals. Rules for living are superfluous if you live alone on a desert island. However you come at it to derive your system of ethics, it will always involve how you treat other people.

    Sure, but that is just a starting point.

How do you build rules for doing that? Is there a sense in which any such process can be called objective?

My argument is about comparing the way we build scientific theories to the way we build moral norms in naturalized ethics, and arguing that the standards of objectivity are analogous.

  28. Bruce,

    We agreed long ago (in blogtime) that two systems can be compared objectively with respect to an already-chosen goal. The remaining issue is whether the choice of goal is itself objective.

    I stated that it isn’t:

    The choice of goal G, however, is inherently subjective. There is no objectively “right” goal. One person may see human flourishing as the goal. Another may target fairness. A third may settle on “conformance to Sharia law”.

    You disagreed:

    I don’t agree with “inherently subjective”, if that means personal opinion. I think there are rational and hence non-subjective arguments for selecting certain types of measures.

    Perhaps it would help if you would give an example of a goal that is objectively better than another, full-stop. I believe that any such goal, when examined carefully, will prove to be subjectively chosen.

  29. keiths:

Perhaps it would help if you would give an example of a goal that is objectively better than another, full-stop. I believe that any such goal, when examined carefully, will prove to be subjectively chosen.

Sorry, Keith, but I don’t think we are communicating.

I see you as trying to use “objectively” per se, e.g., a goal that is objectively better than another.

    I am only trying to compare the objectivity of science and naturalized morality using the analogies in the models which I have tried to point out.

So I am definitely, double-dog-dare-me-but-I-won’t-budge, going to call it a day on this one.

  30. walto:
    For anybody interested, I just uploaded to Scribd some excerpts from an E.M. Adams paper (from the mid-1960s) that defends value realism from a position of linguistic idealism derived from Witt’s Tractatus.

    Thanks for taking the time to upload this.

I did have a quick read-through, but a natural-language-analysis type of argument does not resonate with me enough for me to try to understand it in detail.

    I think science has many fruitful things to say about how we experience the world and how the world could be for those experiences, and, of course, scientific models are not natural language nor are their concepts necessarily analyzable that way. But this idea does not seem to be captured by the author’s approach.

I also see an important place for pragmatic approaches to characterizing reality, e.g., as in Dennett’s “real patterns”.

  31. keiths,

    Sure there are reasons to assert these things. I’ve mentioned a bunch of them. You think that these reasons are no good because (i) these propositions might not be true anyhow, and (ii) I can’t prove them. I have responded that (i) and (ii) are also true of empirical statements. Our difference is simply that you don’t take what I claim to be the evidence provided by your emotions of approbation and disapprobation as evidence of anything. I do.

  32. Bruce,

    I think we are communicating. I expressed my opinion that the choice of goals for a moral system is ultimately and inherently subjective, and you got the message. You expressed your disagreement, and I got the message.

    Why do I think that the goal choice is inherently subjective? Because if you ask why a particular goal is preferred over another, the reason will be either a) because someone simply feels that it’s better, or b) because it promotes some other goal that is considered important. Ask why that second goal is preferred over the alternatives, and you’ll get a) or b) as an answer, and so on. As long as you get answer b), the regress continues. When you get answer a), it stops. So unless you claim that the regress is infinite (a highly implausible claim), then it must end in a), which is purely subjective.

    Now, walto might wish to replace “a) someone simply feels that it’s better” with “a) someone knows that it’s objectively better”, but I question that for all the reasons I’ve been giving throughout the thread.

  33. walto,

    Sure there are reasons to assert these things. I’ve mentioned a bunch of them. You think that these reasons are no good because (i) these propositions might not be true anyhow, and (ii) I can’t prove them.

    No, I’m not asking for proof. Remember, I don’t think that absolute certainty is possible. All I’m asking for are good reasons to believe that the assertions are true.

    We agree that there is no “scientific decision procedure” for determining that objective values exist or that X is objectively moral or immoral. You offered another procedure, but that procedure is unreliable. It leads to a false conclusion when applied to visual illusions.

    You’ve since added this:

The procedure of mine you quoted above [“the procedure you did describe isn’t reliable”] wasn’t intended to be all-inclusive. There are other things too–look at the situation from different angles, substitute other individuals/cases in like circumstances, ask wise people, *ETC.*

    Unfortunately, the augmented procedure still leads to an erroneous conclusion in the case of visual illusions. It isn’t reliable.

    You seem to be arguing that there is something special about values that allows us to trust our intuitions about their objectivity, yet if you follow that reasoning with respect to visual illusions, you reach an incorrect conclusion.

    Furthermore, as I’ve already mentioned, we have good evolutionary reasons to think that our senses (and especially vision, which is arguably our most important sense) are basically reliable, and no such reasons to think that our consciences are reliable indicators of objective morality, since access to objective morality confers no selective advantage. (If it did, that would afford a testable phenomenon, and we’ve already agreed that there is no “scientific decision procedure” for determining objective values.)

    Why should we trust our intuitions about the objectivity of values, if the same approach leads to error in the case of visual illusions?

  34. There’s nothing special about moral intuitions in their provision of prima facie evidence. They’re like all other experiences in that way. That’s how we live and get through the world. Same as senses. I’ve already answered your last question a couple of times. We trust our senses even though they’re sometimes illusory too.

  35. walto,

    We don’t blindly trust our senses. We’re smarter than that, because we know that our senses are imperfect.

    Instead of blindly trusting them, we try to get a feel for how reliable they are, what their weaknesses are, under what circumstances they can be trusted, and when they are likely to fail us.

    You can do that for vision. You can’t do it for your intuitions regarding objective morality (henceforth IORMs).

    1. Vision is known to be basically reliable. You have absolutely no idea whether your IORMs are reliable.

    2. There is selective pressure for reliable vision, but no selective pressure for reliable IORMs.

    3. “Scientific decision procedures” are available for validating vision, but not for validating IORMs.

    4. Vision can be validated against other senses, while IORMs cannot.

    5. The procedure you prescribed for validating your IORMs fails completely when applied to visual illusions, even though vision is known to be basically reliable. You have even less of a reason to expect it to work when applied to your IORMs, yet you trust it.

    Despite all of the above, you are still assuming that your IORMs are basically reliable. It’s a total non-sequitur.

    Here’s another question for you. You’ve already conceded that there are no “scientific decision procedures” for determining whether an action is objectively moral or immoral. (That makes sense, because whether an action is objectively moral or immoral has no effect on its measurable consequences.)

    If there are no measurable consequences, then how — by what specific mechanism — does your conscience gain access to knowledge about objective morality? How do you know that the mechanism is trustworthy?

  36. We don’t “blindly” trust our emotions either–at least the sensible ones don’t. The rest of the early part of your post we’ve discussed ad nauseam. And we must assume our senses are basically reliable also, or we get nowhere with any part of your confirmation procedure.

Here’s an imperfect analogy to address your question at the end. I think it’s akin to asking: “How do you know your belief is about dancing? Maybe it’s actually about golf? By what specific mechanism can you confidently confirm that what you are thinking about is actually dancing?–i.e., if there really IS such a thing as dancing.” Getting at (objective) values is in that way akin to (though maybe not exactly the same as) “getting at” dancing.

Anyhow, your senses are fallible–but in the end, you trust them, as you would if there were no science and as people were required to prior to the existence of science. If they provided no evidence, there would be no science to end up confirming or disconfirming them. Your emotions are in much the same (though admittedly somewhat leakier) boat. We either have to trust them or we take a position of skepticism that nobody in the world believes except a few philosophers (who act much differently when they’re doing other stuff themselves). What there isn’t (and can’t be) is a science of values.

  37. I haven’t really thought through the analogy of values to other objects of propositional attitudes I tossed off above, so it may be off the wall. Fortunately, the important point is in there too.

    As we have no direct, incorrigible access to the “real world” we are stuck with coherence with other apparently true stuff as methods of confirming our beliefs or perceptual experiences. The problems with coherence are that it can never guarantee truth and a pile of cohering statements can ALL be pretty obviously false. Evidence provision has to start somewhere; that is, there has got to be some portion of initial warrant provided by “seemings” or we can never get off the ground. IMO, perceptual judgments and value judgments are precisely alike in that matter. Objects seem green, actions seem improper. We may be mistaken in both but we can try to get whatever confirmation is possible for each by whatever methods are available for that task. The methods aren’t precisely the same, but they overlap–we look again later, we ask others how things seem to them, etc. In the case of facts, we can also use scientific means (though they, too, are fallible). Those aren’t available in the case of values. But that doesn’t change the basic picture, which is the search for greater confirmation via coherence with other things. The more you get, the better. That’s true in both worlds, because we can’t get out of that coherence circle with either–it’s the human condition.

The positivist view that there is some special access to the external world in the case of “verifiable facts” is just confused. There are just additional verifiers around there. Once it is seen that verificationism is mistaken, the view that facts can be objective while values can’t must fall away. Objectivity is not and can never be dependent on absolute verification. And partial verification is available in both arenas. So we have to choose: shall we be skeptical about everything common sense suggests to us–values, physical objects, the past, other minds, causation, etc.–or will we allow that our experiences of all of these ostensible entities provide prima facie evidence?

    That’s the point.

  38. walto,

    In other words, moral realism is grounded in the same ‘animal faith’ as direct realism in philosophy of perception. That seems right to me as an explication-of-the-manifest-image claim. John McDowell makes this point rather nicely (if not clearly). And that leaves entirely open what the ontological status of moral values turns out to be in rerum natura.

I think that there are fairly compelling reasons for thinking that direct perception of situations and events as “containing” grounds for moral description and evaluation is more plausible than regarding ourselves as directly perceiving bare facts and then positing values as somehow hanging off of them, or directly perceiving sensa and then positing physical objects as their causes, with the moral values somehow hanging off of them.

    The fact that there’s some diversity in moral judgments (though how much there is is itself an empirical question) shows us that moral judgments are less constrained by ‘the world’ than, say, perceptual judgments are — it doesn’t show that moral judgments aren’t constrained by ‘the world’ at all. But what constrains moral judgments is, first and foremost, social reality (rather than physical reality).

  39. Kantian Naturalist:
    walto,

In other words, moral realism is grounded in the same ‘animal faith’ as direct realism in philosophy of perception. That seems right to me as an explication-of-the-manifest-image claim. John McDowell makes this point rather nicely (if not clearly). And that leaves entirely open what the ontological status of moral values turns out to be in rerum natura.

    That seems a nice (Santayanaesque) way of putting it. Thanks.

I think that there are fairly compelling reasons for thinking that direct perception of situations and events as “containing” grounds for moral description and evaluation is more plausible than regarding ourselves as directly perceiving bare facts and then positing values as somehow hanging off of them, or directly perceiving sensa and then positing physical objects as their causes, with the moral values somehow hanging off of them.

    Right. I’m tempted to say that what is “directly perceived” isn’t the kind of thing that can be “known.” The “bare facts” or “raw feels” have to be there in some sense or we’re not empiricists. But they aren’t really “knowable”: once organized, etc., they aren’t bare or raw anymore, and knowledge requires that kind of organization (etc.)

    The fact that there’s some diversity in moral judgments (though how much there is is itself an empirical question) shows us that moral judgments are less constrained by ‘the world’ than, say, perceptual judgments are — it doesn’t show that moral judgments aren’t constrained by ‘the world’ at all.

    I like that.

    But what constrains moral judgments is, first and foremost, social reality (rather than physical reality).

    I agree that it’s not physical reality that does the constraining–or there COULD be a science of values, but can you flesh out what you mean by “social reality” a little more? Are you taking the cultural relativist position that it’s the societal norms that makes an action good or bad? Thanks.

  40. walto,

    The rest of the early part of your post we’ve discussed ad nauseam.

Actually, we haven’t discussed it, and that’s the problem. I keep pointing out the vast differences between your senses, whose reliability can be assessed, and your intuitions regarding objective morality (IORMs), whose reliability cannot be assessed. You keep ignoring those important differences.

    The situations are not at all symmetrical.

    And we must assume our senses are basically reliable also, or we get nowhere with any part of your confirmation procedure.

    We don’t have to assume that our senses are basically reliable. That is something we can demonstrate to ourselves.

    I see my monitor in front of me. I can reach out and touch it. If I stick my nose over it, I can smell the warm electronic smell wafting up through its vents. I can feel its weight when I lift it, and I can hear the sound it makes when it turns on. I can photograph it and examine the photo. I can measure the power it draws from my electrical system.

    And it isn’t merely that all of these tests show that the monitor is there. They are also coordinated. When I reach out to touch the monitor, my hand feels the contact at the same time that my eyes see it happening. I hear the sound of the monitor turning on at the same time I feel the power button depressing, at the same time I see my finger pushing it, at the same time that my watt meter shows a jump in power consumption.

    In order to assert that the monitor wasn’t really there, I would have to believe not only that all of my senses were wrong, and not only that the photograph and the power measurement were wrong, but that they were all wrong in a tightly coordinated and choreographed fashion. It beggars belief.

    Now try the same thing with your intuitions regarding objective morality. Assert that objective morality doesn’t exist. What happens? You don’t have to believe that your other senses are defective. You don’t have to throw out any scientific measurements. You don’t have to assume an implausible conjunction of tightly coordinated and choreographed errors.

    You simply have to acknowledge the fallibility of your intuitions. That’s all. It’s totally plausible.

    The two situations could hardly be more different.

Here’s an imperfect analogy to address your question at the end. I think it’s akin to asking: “How do you know your belief is about dancing? Maybe it’s actually about golf? By what specific mechanism can you confidently confirm that what you are thinking about is actually dancing?–i.e., if there really IS such a thing as dancing.” Getting at (objective) values is in that way akin to (though maybe not exactly the same as) “getting at” dancing.

    That doesn’t make sense. We can think about things like unicorns, but that doesn’t demonstrate their existence or validate our intuitions regarding them.

    Anyhow, your senses are fallible–but in the end, you trust them, as you would if there were no science and as people were required to prior to the existence of science.

    You don’t need science. All of the tests I described above, with the exception of the photo and the power measurement, can be done without it.

    Your emotions are in much the same (though admittedly somewhat leakier) boat. We either have to trust them or we take a position of skepticism that nobody in the world believes except a few philosophers (who act much differently when they’re doing other stuff themselves).

    If you feel happiness because you think you are about to get a raise, then you feel happiness. It doesn’t mean that you are actually going to get a raise.

    If you feel anger because you think someone has wronged you, then you feel anger. It doesn’t mean that the person has actually wronged you.

    If you feel that something is objectively immoral, then you feel that it is objectively immoral. It doesn’t mean that it actually is objectively immoral, and it doesn’t mean that objective morality exists.

  41. KN,

    The fact that there’s some diversity in moral judgments (though how much there is is itself an empirical question) shows us that moral judgments are less constrained by ‘the world’ than, say, perceptual judgments are — it doesn’t show that moral judgments aren’t constrained by ‘the world’ at all.

    It also doesn’t show that they are constrained by ‘the world’, including by any objective values that exist ‘out there’. That’s been my point throughout the thread.

    There simply aren’t any good reasons to accept our intuitions regarding objective morality as reliable.

    But what constrains moral judgments is, first and foremost, social reality (rather than physical reality).

    Then you are at odds with walto, who thinks that our moral judgments are constrained (albeit imperfectly) by objective morality, which does not depend on society for its existence.

  42. keiths,

    Your monitor I actually HAVE already discussed, and the unicorn is a red herring. (Maybe unicornicity wouldn’t be, but it doesn’t matter.) It’s just repetition now.

  43. walto: I’m tempted to say that what is “directly perceived” isn’t the kind of thing that can be “known.” The “bare facts” or “raw feels” have to be there in some sense or we’re not empiricists. But they aren’t really “knowable”: once organized, etc., they aren’t bare or raw anymore, and knowledge requires that kind of organization (etc.).

    I’ve started reading Noë’s Action in Perception, and he argues in convincing detail (both phenomenological and neurophysiological) that sensations are not sufficient for perception — rather, perception requires non-propositional know-how, embedded in the relevant sensorimotor skills, of how to understand or make sense of sensory stimuli. Whether that’s a different kind of knowledge from “propositional knowledge” is the very question I’m working on right now. More precisely, I’m trying to figure out how to distinguish between non-propositional knowledge and propositional knowledge without committing “the Myth of the Given”. I made a stab at this in my book, but I’m not happy with it.

    walto: I agree that it’s not physical reality that does the constraining–otherwise there COULD be a science of values–but can you flesh out what you mean by “social reality” a little more? Are you taking the cultural relativist position that it’s the societal norms that make an action good or bad? Thanks.

    Cultural relativism is one of those things that I fervently hope is false, but I worry that it’s true. I don’t think that it makes any sense to talk about moral values in the total absence of some society or other (though not necessarily a human society — whether bonobos or dolphins have moral values is a very interesting question!).

    But I also don’t think that moral values are “subjective” in the technical sense of “subjective” that’s part of the lingua franca of professional philosophy, because what makes a moral judgment correct or incorrect depends, in part, on whether that judgment is part of a family of concepts that tends, when acted upon consistently, to assist in the cultivation of capacities of either the person making the judgment or the person to whom the judgment is applied. So there are facts that (partly) ground the correctness or incorrectness of moral judgments, but those facts are facts about what human beings, in general, need in order to flourish.

  44. keiths: Then you are at odds with walto, who thinks that our moral judgments are constrained (albeit imperfectly) by objective morality, which does not depend on society for its existence.

    If that is indeed Walter’s position, then yes, he and I would be on different sides of this particular issue.

  45. Kantian Naturalist: If that is indeed Walter’s position, then yes, he and I would be on different sides of this particular issue.

    We are indeed. I might go with biology as somehow being the arbiter or genesis or something, but I don’t see culture. Anyhow, I have to admit that, based on your comments above, I’m guessing your position is somewhat more nuanced and carefully thought out than mine is.

    I’d like to read your book some day.

  46. walto,

    If you don’t have any counterarguments to offer, I’m happy to leave the discussion where it stands:

    1) We’ve agreed that you can’t scientifically demonstrate the existence of objective values, or scientifically determine that something is objectively moral or immoral.

    2) The only (non-scientific) procedure you’ve offered for answering those questions is flawed, so that when it is applied to vision, it wrongly concludes that some optical illusions aren’t illusions at all.

    3) It has even less hope of working when applied to the conscience, because there is absolutely no selective pressure toward an objectively accurate moral sense, and unlike vision, which can be cross-checked against other senses, the conscience cannot be validated that way.

    4) Your claim that we must assume the reliability of all our senses, including the conscience, is incorrect, and I have shown in detail that (and how) we can validate the senses (such as vision), but not our conscience’s ability to access objective morality.

    5) You haven’t identified a flaw in my argument.

    Despite all that, you are still maintaining that we can assume the basic reliability of our consciences.

    It reminds me of an experience I had a couple of decades ago. The Monty Hall Problem had just gone viral, and I was explaining it to my roommate. He was a semiconductor technician, a smart guy with a very good grasp of math.

    I explained in detail why switching doors was twice as good on average. He followed my logic, and he didn’t have a counterargument, but he simply could not let go of the intuition that the odds must be 50-50. I had to actually play the game with him about 30 times before he began to believe that switching was the right choice, and even then he continued to have doubts, so strong was the (erroneous) intuition.
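
    For anyone who would rather check the arithmetic than take my word for it, here is a minimal Python sketch of the standard three-door game, played many times with and without switching; the function names (play_round, win_rate) and the trial count are arbitrary illustrative choices, not anything canonical.

        import random

        def play_round(switch):
            """Play one round of the standard Monty Hall game; return True on a win."""
            doors = [0, 1, 2]
            prize = random.choice(doors)   # door hiding the prize
            pick = random.choice(doors)    # player's initial pick
            # The host opens a door that is neither the player's pick nor the prize.
            host_opens = random.choice([d for d in doors if d not in (pick, prize)])
            if switch:
                # Switch to the one remaining closed door.
                remaining = [d for d in doors if d not in (pick, host_opens)]
                pick = remaining[0]
            return pick == prize

        def win_rate(switch, trials=100_000):
            """Estimate the win probability over many independent rounds."""
            return sum(play_round(switch) for _ in range(trials)) / trials

        print("stay:  ", win_rate(switch=False))   # roughly 1/3
        print("switch:", win_rate(switch=True))    # roughly 2/3

    The constraint that makes switching better is right there in the host_opens line: the host never reveals the prize, which is exactly the asymmetry that the 50-50 intuition ignores.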

    I think you may be in the same boat. Your intuitions regarding objective morality are too strong to let go of. I believe you realize (intellectually, at least) that the strength of an intuition is not an indicator of its correctness, and that you haven’t given any other good reasons for trusting your conscience in this way. But like my roommate, you just can’t let go of the erroneous intuition.

  47. All asked and answered. What actually hasn’t been addressed in this thread are any of my arguments. You prefer to simply repeat the same attacks that have been replied to over and over.

    Of course I have identified “flaws” in your argument–I’ve pointed out several times that it’s based on a mistaken verificationist assumption that factual corroborations may be dispositively verified while value corroborations cannot. I know that I and all my friends here and elsewhere may be wrong however often we check; what you don’t get is that you and all your friends and science can be wrong no matter how many times you check.

    What I can’t do is either convince you of my position or disprove yours, which I have never doubted may well be true. I just agree–in how I live when I’m not doing philosophy–with almost everybody in the world that it isn’t. Similarly, you can’t convince me or disprove my objectivist view. That is the nature of basic philosophical disagreements, I’m afraid. This impasse clearly irks you no end, so, as usual, you get obnoxious. Your Ptolemy and Monty Hall references, like your unicorn one, are red herrings. As I’ve already explained (and even supplied a link so perhaps someone else could explain it further, since you didn’t seem to me to be getting it), the view I’m pushing isn’t a matter of this or that belief being true because it’s commonly held–it’s a matter of whether an entire form of expression makes any sense. And again, your claim that your preferences ARE values is no help to discussions of good and bad. Subjectivism is not a new position–its defenders generally understand that they have work to do to explain moral discourse. You? You simply repeat the same obvious remarks that I could be wrong when I think that some action is OK–even if I ask my friends. No shit! Really???

    In sum, what is clear is that you will simply repeat your denials that there can be any objective values based on the fact that any value judgment may be wrong, and that you will not address any of the objections to subjectivism that have been leveled not only by me but by many others for many years. Like most discussions with you, it takes about five posts for it to get unconstructive and annoying. (I do thank you for resisting the powerful urges you likely have had to misquote or link stuff out of context. That at least is a positive sign. Perhaps you noticed that KN and I can disagree regarding cultural relativism without one of us calling the other a Sun worshiper. That might be another good thing for you to try. It’s possible to disagree without always repeating your confident assertions that anybody who disagrees with you must be wrong and won’t admit it.)

    So, I guess I’ll join Bruce in lurking when you post. You certainly have interesting things to say… except when anybody disagrees with you.

  48. Kantian Naturalist: So there are facts that (partly) ground the correctness or incorrectness of moral judgments, but those facts are facts about what human beings, in general, need in order to flourish.

    It is old news, but here is a nice summary of how a lot of the received view on human psychology may just be the psychology of people brought up in a market society, with a lot of buildings with four walls and corners. (Those two details refer to ideas of fairness to strangers and to certain optical illusions.)

    Of course, psychology is about what is and morality is about what should be.

    But naturalistic morality needs an understanding of what people are, and the details of that understanding may not be what we (or at least some of us) thought.
