Imagine a woman with her newborn child. On a hot day, she negligently leaves the infant in a locked car, where he dies from the heat. When she discovers the death a few hours later, she suffers a psychotic break and convinces herself that it was her husband’s fault. When her husband gets home that night, she murders him in a blind rage. Afterwards she regrets her actions, but those around her still blame her for the death of her family. This raises the question: is she being rightfully blamed? Why should the mother have to take responsibility for the vicious belief that led to the ensuing tragedy?
Responsibility comes in many forms. If we were talking about actions, all sorts of reasons could lead to someone having to take responsibility, since several external factors would also need to be considered. One such case is the example above, where responsibility for the action was attributed to the mother as a consequence of blame. For beliefs, where the stakeholders are limited to the believer and the belief itself, the possibilities are narrower. Most commonly, one takes responsibility for a belief if one permitted it, or in other words, let it manifest voluntarily. Thus the voluntariness of a belief and responsibility for that belief are co-extensive. Granting this, proving that beliefs are voluntary would also prove that we can be held responsible for them.
Doxastic Involuntarists convincingly assert that beliefs are involuntary because it’s conceptually impossible for beliefs to be voluntary. This argument posits that beliefs in essence aim at the truth; accordingly, we can only sincerely believe things that we perceive to be true based on the information we have. If beliefs were voluntary, I’d be able to believe anything by sheer willpower. This is evidently not the case; I can’t sincerely believe that “I am an ancient Athenian” just by wanting to believe it, since I know that the belief contradicts the truth. By contrast, I might believe that it is raining if I look out the window and see rain pouring down: the sight of the rain provides evidence that convinces me it is raining. A belief can’t be a result of both the striving for truth and sheer will, as the two often conflict. Since we know the former is true, we know that will plays no part in belief, meaning beliefs can’t be voluntary.
However, not all examples seem to fit this theory. In The Redemption of Time, Baoshu’s authorized fourth entry in the universe of Cixin Liu’s Remembrance of Earth’s Past trilogy, the Lurker, a cosmic entity who has been planning to bait Master, his lifelong enemy, into a death trap, rejects the belief that his trap has failed, because accepting that fact would also mean accepting the unfavourable outcome of losing an eon-spanning war. When the signs of his plan working fail to appear, the author describes the Lurker’s “peerless intelligence [as] capable of coming up with the answer, but he had avoided thinking about it. It was such a terrifying possibility”: that Master might have seen through his plan and devised a countermeasure. Even though the Lurker’s near-perfect logical abilities have already produced a rational truth, an indubitable basis for a belief, he ignores it out of fear. In other words, the Lurker voluntarily and sincerely believes the untruthful idea over the truthful one, actively deceiving himself. He has thus created an exception to Doxastic Involuntarism’s conceptual argument, as his will has overridden truth. How is this possible?
The process of self-deception isn’t entirely intuitive and should first be treated with doubt. The Lurker had good reason to believe that his plan had failed: he had predicted a time frame—during which he should’ve seen a signal indicating the trap’s success—and that time frame had already passed. By contrast, the main cause that convinced the Lurker he was safe and that Master was merely taking longer than expected seems far more negligible: the alternative merely instilled fear in him. Somehow, he pushes aside his logical reasoning and believes his emotions with more sincerity. This should be impossible, for it would suggest that the Lurker has a second mind, acting separately from his “peerless intelligence,” in which to hold the deceitful belief, since the mind that held the truthful belief has already been rejected.
Deception implies that the deceiver can theorize multiple conclusions, at least one of which is truthful while the rest are deceitful. Only then is the deceiver able to deceive intentionally by conveying a deceitful conclusion. Correspondingly, the deceived must not know the truthful conclusion, so that the deceitful conclusion can become the perceived truth. Here lies a paradox: the Lurker can’t fill the role of the deceived, because as the deceiver he already knows the truthful conclusion, meaning self-deception itself is conceptually impossible.
The likely explanation is that the Lurker performed a subconscious weighing of the relevant factors. As previously discussed, a perceived truth is produced from perceived evidence. In the Lurker’s case, the perceived truth should objectively have been that he was in danger. However, because that conclusion put his emotions at risk, his bias toward immediate self-preservation created the illusion that the prevention of fear was a significant factor in his reasoning. He chose to prioritise the more imminent danger of feeling a negative emotion over the less imminent danger of death, a mistake caused by ignorance according to Socrates in Plato’s Protagoras. Thus, this account still fits the requirement that a belief must aim at the acquisition of perceived truth.
Similarly, in Clifford’s famous example in The Ethics of Belief, a shipowner must decide whether his ship is seaworthy. He knows that the ship is old and poorly constructed, and many of his peers have expressed concerns. However, the thought of spending a large sum of money repairing the ship made him unhappy, so he “put his trust in providence” and sent the ship out to sea. Despite many good reasons to think the ship wasn’t seaworthy, the shipowner still undergoes “self-deception,” an indicator of voluntariness. In both examples, the “self-deception” is caused by a factor not usually considered in rational decision-making: personal happiness. This gives us a reason for believing other than the belief being a perceived truth: emotion. In other words, beliefs can come from prudential considerations detached from evidential considerations.
It should be said that relying on prudential considerations is usually considered a mistake, and for good reason. Take the ratio of the mass of a proton to the mass of an electron, around 1,836:1. Although the two are counterparts in an atom, the mass electrons contribute is negligible compared to that of protons, just as the weight prudential considerations carry in an argument is negligible in the face of evidential considerations. It’s important to consider emotion, but it should never outweigh cold, hard evidence in rational decision-making. This is why the argument only works if we restrict prudential considerations with a premise: they should be considered rational, on the same level as evidential considerations, only when the known evidence is insufficient. Only when evidential considerations aren’t present is the weight of prudential considerations substantial enough to sway the decision.
A better example to illustrate prudential considerations is Pascal’s Wager, which argues that it’s better to believe in God than not to, because the former grants eternal joy at best and nothing at worst, while the latter grants nothing at best and condemns you to eternal suffering at worst. In this choice, there’s insufficient known evidence supporting or denying the existence of God, so it’s impossible to assess the validity of either belief on evidential grounds. Prudential considerations, however, look at something else: the results of the belief. Two people, one who believes in God and one who doesn’t, might live relatively similar lives unobstructed by their beliefs, but the believer is theorised to be happier, because he has potential eternal joy to look forward to. The non-believer gets no such boost of happiness, and might even be unhappy because of the potentially looming eternal suffering. On this account, the person who believes in God has made the more rational decision. The Lurker and the shipowner being held responsible is likewise not a result of self-deception but of prudential considerations, applied less rationally. However, prudential considerations still don’t fully resolve the dilemma posed by the conceptual argument, which may be talking about something slightly different.
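The structure of the wager can be made explicit with a rough expected-value comparison (a sketch only; the credence p and the infinite payoffs are illustrative assumptions rather than part of Pascal’s text):

\[
\mathbb{E}[\text{believe}] = p\cdot(+\infty) + (1-p)\cdot 0 = +\infty,
\qquad
\mathbb{E}[\text{disbelieve}] = p\cdot(-\infty) + (1-p)\cdot 0 = -\infty,
\]

where p is any nonzero credence that God exists. Under these stipulated payoffs, belief dominates no matter how small p is, which is precisely the sense in which the wager replaces evidential assessment with prudential comparison.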
The conceptual argument for Doxastic Involuntarism relies on belief as such: an idealised state in which a perfectly capable being holds at least some concrete evidence on which to form the belief. In reality, this is almost never the case, which is why most of the examples above exist yet don’t contradict the argument: the Lurker had perfect capabilities but incomplete evidence; the shipowner had indisputable evidence but was misled by his ability to err; and Pascal’s Wager involves no evidence at all, leaving it almost completely detached from the natural state a belief is meant to be in. This means we have to prove responsibility for two different kinds of belief: the normative beliefs of the conceptual argument and the empirical beliefs of the real world.
Descartes proposes an argument in his Meditations which implies that voluntary beliefs are conceptually possible. He asserts, “…for the faculty of will consists alone in our having the power…to pursue or shun those things placed before us by the understanding…” In other words, the will exists to make decisions based on the understanding. This rests on the premise that the understanding is provided by God, who can’t possibly err, meaning any mistake made in decision-making is made by the will and is thus voluntary. As long as it’s possible for mistakes to be made, voluntary beliefs are conceptually possible. That mistakes do happen is discussed in Plato’s Protagoras, in which Socrates theorises that they arise from a skewed perception of short-term and long-term benefits. His point is primarily about human ignorance, but it inadvertently establishes how likely mistakes are. Doxastic Involuntarism’s conceptual argument fails to take this human ability to err into account.
Empirical beliefs tend to be clouded by the aforementioned motive: prudential considerations. Anecdotally, everyday decisions are made with arbitrary personal reasons in mind rather than with full-blown analyses. The reason for this isn’t entirely clear, but there are some plausible factors: convenience, since it takes less work to realize what you want than to figure out what’s objectively optimal; faith, such as the shipowner’s trust in providence; or, in the case of the Lurker, an innate principle of self-preservation that makes us reluctant to experience negative emotions. Moreover, a complete, error-free analysis of a scenario is so unlikely that it can be treated as negligible in an empirical debate. Prudential considerations are far more common than evidential ones in empirical beliefs, meaning that here, too, beliefs are voluntary.
There are entire realms of argumentation that this essay doesn’t consider; for example, a different interpretation of what it means to take responsibility, applicable in certain rare circumstances, would produce a scenario where voluntariness and responsibility aren’t co-extensive. However, when considering likelihood and eliminating vastly extraneous possibilities, such as the notion of perfect knowledge in an empirical belief, it can confidently be concluded that beliefs are voluntary and are thus something we can be held responsible for. This holds for normative beliefs, where erring is not only possible but likely, and for empirical beliefs, where prudential considerations are weighted more heavily than evidential ones.
References
Clifford, William Kingdon. The Ethics of Belief. London: Contemporary Review, 1877.
Descartes, René. Meditations on First Philosophy. Cambridge: Cambridge University Press, 1996.
James, William. The Will to Believe. 1894.
Plato. Protagoras. Edited by Friedrich Jacobs, Johannes Samuel Kroschel, Valentin Christian Friedrich Rost, and Gottfried Stallbaum. 1875.
Vitz, Rico. “Doxastic Voluntarism.” Internet Encyclopedia of Philosophy. Accessed June 3, 2024. https://iep.utm.edu/doxastic-voluntarism/#H3.
Williams, Bernard. Problems of the Self. Cambridge: Cambridge University Press, 2011.
By Pinsong Sun