Moral heuristics are methods that serve the purpose of reducing the effort associated with moral decision making. The purpose of this article is to create a prescriptive model of moral heuristics usage. According to this model, a heuristic's effectiveness depends on the problem that we are solving by using the heuristic, the environment in which it is used, and the moral standard of the decision-maker. Accordingly, for the effective use of heuristics it is necessary, first, to use heuristics that are relevant to the problem situation; second, to use heuristics in the environment in which their effectiveness is established; third, to let the choice of heuristics be determined by the moral standard of the decision-maker.
Keywords: moral heuristics, moral decision making, prescriptive model, moral standard, moral judgment
The study of moral heuristics began in the early 1990s (Baron 1993; Messick 1993). The most cited work in this field is Cass Sunstein's article 'Moral Heuristics' (2005a), where the author defines moral heuristics as moral short-cuts, or rules of thumb, that generally work well but that also lead to mistaken and even absurd moral judgments. Our article is based on the definition of heuristics proposed by Anuj Shah and Daniel Oppenheimer (2008), who define heuristics 'as methods that use principles of effort-reduction and simplification' (Shah, Oppenheimer 2008: 207). Accordingly, moral heuristics can be defined as methods that serve the purpose of reducing the effort associated with moral decision making (Nadurak 2018).
We already have a fairly large body of knowledge about the influence of heuristics on decision making and, in particular, on moral decision making (Gigerenzer 2008a, 2010; Sinnott-Armstrong et al. 2010; Fischer 2016; Nadurak 2018, 2020; Hartmann, McLaughlin 2018; Lindström et al. 2018; Amos et al. 2019; Friedland et al. 2020; Gesang 2021, etc.). Of course, much remains to be done, but we can already discuss creating a prescriptive model that would offer advice on how to use heuristics to make moral decisions.
Jonathan Baron wrote that there are three types of models in the study of judgment and decision making: normative, descriptive and prescriptive. Normative models are standards for evaluation. Descriptive models try to explain how people make judgments and decisions. With normative and descriptive models in hand, we can try to find ways to improve judgments according to the normative standards. The prescriptions for such correction are called prescriptive models (Baron 2012).
The descriptive model of moral heuristics has been quite well elaborated by Baron (1993), Gigerenzer (2008a, b; 2010), Sunstein (2005a, b; 2008; 2010) and other researchers. As for the normative model, there is no generally accepted point of view on what should count as the normative standard for moral judgments. While there is no such agreement, it is advisable to accept the idea of subjective rationality, according to which a person makes a moral mistake when they fail to meet their own standard (Pizarro, Uhlmann 2005: 558). Such a step is expedient because the subjectivity of moral normative standards does not rule out the possibility of deviating from them (Nadurak 2020). Accordingly, the use of such a standard as a working version enables the development of descriptive and prescriptive models, since it allows us to determine when heuristics lead to mistaken decisions and to formulate prescriptions for avoiding them.
Researchers who have studied moral heuristics have also tried to offer recommendations for their effective use. For example, Sunstein argued that heuristics should not be relied upon when making decisions about exotic cases and problems (Sunstein 2005a: 531). Similarly, Sinnott-Armstrong et al. argue that moral heuristics lack reliability in unusual situations; therefore, in such cases, intuitions based on such heuristics should be questioned (Sinnott-Armstrong et al. 2010: 268). Gigerenzer suggests that the accuracy of heuristics depends on the structure of the environment (Gigerenzer 2008b: 41–2). Elizabeth Anderson believes that 'successful moral deliberation uses moral heuristics flexibly as inputs to deliberation' along with other problem-related information (Anderson 2005: 544). Robert William Fischer, analysing moral disgust as a heuristic, concludes that it should be trusted when, apart from the feeling of disgust, there are other good reasons for the heuristic solution (Fischer 2016: 690).
The purpose of this article is to integrate the aforementioned ideas and create a prescriptive model of moral heuristics usage. According to this model, the effectiveness of a heuristic in moral decision making depends on the problem that we solve using the heuristic, the environment in which the heuristic is used, and the moral standard of the decision-maker. In essence, this paper generalizes the recommendations of other researchers, as well as examples of the use of heuristics, in order to build a general prescriptive model of their usage.
Heuristics are not universal problem-solving methods. Each heuristic was formed and tested to solve a certain range of problems. When a heuristic adapted to one range of problems is used to solve others, for which its effectiveness has not been established, the resulting decision is questionable (Sunstein 2005a: 531).
Thus, the first piece of advice on the effective use of heuristics can be suggested: heuristics relevant to the problem situation should be used. A heuristic is 'relevant' if it has proven effective for solving problems of that kind. Consider the examples of deontological heuristics (Nadurak 2018: 141), the imitate-the-successful heuristic (Fleischhut, Gigerenzer 2013: 470), and the imitate-your-peers heuristic (Gigerenzer 2010: 545).
One of the most common kinds of heuristics is deontological norms, such as 'do not kill', 'do not lie' and so on. It should be noted that not all researchers agree that moral norms can be regarded as heuristics. For example, Gigerenzer denies such a possibility (Gigerenzer 2010: 544), whereas Sunstein believes that deontological norms (although not all of them) can be regarded as heuristics (Sunstein 2005a, b; 2013). In this article, moral norms will be treated as heuristics, since they are consistent with the definition of heuristics adopted earlier, as methods that serve the purpose of reducing the effort associated with moral decision making. A moral norm acts as a heuristic when a person decides to follow it without resorting to a comprehensive analysis of the problem. For example, instead of figuring out the consequences of a possible deception or its motives, one simply decides to follow the rule 'do not lie'.
Thus, norm-heuristics are what Sunstein called 'generalizations from a range of problems' (Sunstein 2005a: 531). That is, they are often (although not always) formed as typical successful solutions for a certain range of problems: when faced with these problems, people experiment with various solutions until certain simple solutions prove effective. These then spread in the community and become typical solutions to these problems. Therefore, it is logical to assert that they can be considered effective for solving those problems. For example, the heuristic rule 'do not deceive' was formed to solve the problem of sharing information with friendly people. However, if you take it as a universal rule and try to apply it to all problems that relate to sharing information with other people, you can find yourself in the situation of a soldier who, guided by this rule, informs the enemy about the location of his comrades. That is, he uses it not in situations of communication with friendly people, for which it was formed, but in situations of communication with enemies, for which there are other heuristics. Similarly, the application of this rule in sports can be inappropriate or even absurd, because deception there is often part of the game.
Another common heuristic is the imitate-the-successful heuristic, which adopts the behaviour of a successful person (Fleischhut, Gigerenzer 2013: 470). This heuristic will be effective if the problem faced by the decision-maker is similar to the one dealt with by the successful person whose solution is available for imitation. The smaller the similarity, the more likely the errors. For example, a decision-maker knows of a case in which a certain person informed a law enforcement agency about financial abuses in his company. As a result, a large criminal money-laundering scheme was exposed, and this person became known throughout the country as an example of a brave whistleblower. The decision-maker also finds himself in a situation where financial abuses occur in his company; however, their scale is insignificant, and the company's management is not aware of the problem. Should he imitate the above-mentioned decision and report the abuse to law enforcement agencies or the media? The problems are similar only at first glance; in reality, they are significantly different. First, the scale of the problems is different. Second, in the first case there was no other way to cope with the problem, because the company's management was involved in the abuse. So, given the difference between the problems, the decision-maker should not use the imitate-the-successful heuristic. Perhaps, in this situation, a confidential conversation with the company's management would be a better solution.
When making moral decisions, people also often use the imitate-your-peers heuristic: do what the majority of your peers do (Gigerenzer 2010; Fleischhut, Gigerenzer 2013). It is well known that people tend to imitate the behaviour of others and even 'judge others' behaviours as more moral when they are common than when they are rare in social environment' (Lindström et al. 2018). At the same time, many bad and immoral decisions have been made by imitation. It is therefore important to determine the conditions under which its use will be effective. One condition is that the problem must be the same as, or similar to, the problem that was successfully solved by the others whose solution is available for imitation. Consider the following example. One day, person A joined the condemnation expressed by their work colleagues towards person B, who did not finish a project on time due to negligence. The next day, person B again failed to complete the project on time, but this time for objective reasons. If person A condemns person B, imitating the previous day's actions of their work colleagues, person A will be imitating the solution of one problem to solve another, different problem. This decision will be unsuccessful, because in the second case, unlike the first, person B is not at fault for the project failure.
The effectiveness of a heuristic also depends on the environment in which it is used. The environment is the aggregate of the social, natural and cultural conditions by which the decision-maker is surrounded. Gerd Gigerenzer claims that moral behaviour results from an interaction between mind and environment, and calls this 'ecological morality' (Gigerenzer 2010: 540). Therefore, 'a heuristic is not good or bad, rational or irrational; its accuracy depends on the structure of the environment' (Gigerenzer, Gaissmaier 2011: 474), and 'the same heuristic may lead to different outcomes, ethical or unethical, depending on the environment' (Fleischhut, Gigerenzer 2013: 473).
For the effective use of moral heuristics, it is, first of all, important to understand that they are formed and tested in a particular environment. Accordingly, it is in this environment that they have an established efficacy. If they are used in a different environment, however, the result will be unknown. For example, Gigerenzer writes that the 'imitate the majority' heuristic is successful in relatively stable environments but not in quickly changing ones (Boyd, Richerson 2005), and that 'tit for tat succeeds if others also use this heuristic but can fail if otherwise' (Gigerenzer 2008b: 41–2). Leda Cosmides and John Tooby argue that most moral heuristics were formed for 'the social world in which humans evolved – a world of tiny bands peopled with a few dozen friends, relatives, and competitors' (Cosmides, Tooby 2006: 175). But 'the modern world, with its vast nation states peopled with millions of strangers, has little in common with the social world in which humans evolved' (Cosmides, Tooby 2006: 175). Thus, if these heuristics are used in modern cosmopolitan society, problems sometimes arise.
Accordingly, we can formulate the following advice for the effective use of moral heuristics: heuristics should be used in the environment in which their effectiveness is established. Below, the relevance of this advice is illustrated with the example of several heuristics.
Deontological norms arise, undergo selection and prove their effectiveness in a particular environment and under the influence of that environment. Accordingly, it is appropriate to use them in this environment, because it is here that their effectiveness has been established. However, a rule that is effective for solving a particular problem in one environment (for example, medieval Europe) will not necessarily be effective in another (modern Europe). For example, if today you tried to apply the medieval norms that regulated the punishment of children, you could face not only moral condemnation but also criminal prosecution. That is, the same problem – how to punish children – is solved in different environments with the help of different norms. The norms of medieval society were formed under the influence of the environment of that time and were therefore acceptable for that society. However, they do not correspond to the environment of modern Europe, which determines the formation of other norms.
The effectiveness of the imitate-the-successful heuristic depends on how similar the environment in which the decision-maker solves the problem is to the environment in which the same problem was solved by the person whose successful solution is available for imitation. The greater the similarity, the greater the effectiveness of such imitation. For example, if you imitate some decision of a successful person who lived in ancient China in order to solve the same problem in modern Europe, your decision is less likely to be successful than if someone had imitated that decision in ancient China.
The same conclusion holds for the 'imitate the majority' heuristic. For imitation to be successful, the environment in which the imitation takes place must be as similar as possible to the one in which the imitated action took place. If the environment is different, the effectiveness of such imitation is unknown. For example, imagine that a decision-maker who is a police officer imitates the behaviour of his or her work colleagues within the church community to which he or she belongs. This imitation runs the risk of being unsuccessful, since the problem-solving methods used by the police often differ significantly from those used in church communities.
It should also be kept in mind that the decision-maker's environment includes other people whose behaviour they can imitate. Therefore, the effectiveness of the decision also depends on who these people are. For example, if you imitate the behaviour of people who have established themselves as morally responsible persons, such a decision is more likely to be successful than when the behaviour of street gang members is imitated.
It should be added that heuristics formed to solve certain problems in one environment can eventually be used in another environment to solve other problems and also prove to be effective. This, after all, is a normal way in which heuristics develop. For example, the 'do not harm the innocent' heuristic, which was used in relations between people, is nowadays beginning to be used effectively to solve problems related to the treatment of animals. Therefore, it does not follow from the above that heuristics formed to solve certain problems in one environment will not be effective in solving other problems in another environment. It only means that the decision-maker is more likely to succeed when they use relevant heuristics in the environment in which their effectiveness is established.
The effectiveness of heuristics also depends on the moral standard of the decision-maker. The moral standard embodies a person's idea of morally correct behaviour. Accordingly, the effectiveness of a moral decision depends on how well it complies with this standard. For example, if consequentialism is a person's moral standard, then a decision is moral if it leads to the best possible consequences. Therefore, a person must choose a heuristic that will lead to a decision that meets their moral standard.
There may be cases in which a heuristic is relevant to the problem and the environment, yet a solution based on it is unsuccessful because it does not meet the person's normative standard. For example, imagine a situation in which a decision-maker who is a committed Christian is publicly insulted by another person. There is a generally recognised norm that allows him or her to demand a public apology for this insult. This norm is relevant to the problem and relevant in the society in which the decision-maker lives. However, if he or she chooses to use it, they will make a decision that contradicts their moral standard – to forgive those who have wronged them. Accordingly, such a decision will most likely be recognised by them as unsuccessful. Therefore, the following advice can be formulated: the choice of heuristics should be determined by the moral standard of the decision-maker. Let us look at a few examples.
The effectiveness of the imitate-the-successful heuristic depends on whether the normative standards of the decision-maker match those of the person whose solution is available for imitation. For example, if a decision-maker who adheres to a particular religious ethic imitates the decision of a person who adheres to another religious ethic, the probability of an unsuccessful decision increases, given the differences between the moral codes of different religions.
When using the imitate-your-peers heuristic, the moral standard of the persons whose solution is available for imitation also matters. If a person imitates the behaviour of people whose moral beliefs are different from their own, the risk of a wrong decision is high. For example, by imitating the behaviour of consequentialists, a proponent of deontological ethics can sometimes commit actions that are wrong in terms of his or her own moral beliefs.
Gigerenzer states that one of the major heuristics underlying moral behaviour is the default heuristic: 'if there is a default, do nothing about it' (Gigerenzer 2010: 546). The effectiveness of this heuristic also depends on the moral standard of the decision-maker. For example, traditional norms can be seen as a variation of the default heuristic in moral decision making. Of course, it is reasonable to follow traditions, especially under uncertainty. But traditions often contradict moral beliefs, so by relying on them we can make a morally wrong decision. Therefore, when using the default heuristic, it is worth checking whether the decision based on it contradicts the moral standard of the decision-maker.
The analysis of the literature on moral heuristics shows that there is a consensus among researchers on the following points. First, moral heuristics predominantly lead to successful decisions. Second, heuristics sometimes lead to mistakes. Third, in many situations we are forced to rely on heuristics, since a complete analysis of the problem is a difficult and sometimes impossible task. Therefore, there is a need for a prescriptive model of the use of moral heuristics that would increase the likelihood of successful decisions and reduce the likelihood of unsuccessful ones. This article proposes such a model, which defines the basic conditions for the effectiveness of moral heuristics. According to it, effectiveness increases if, first, the heuristic is relevant to the problem, second, it is used in the environment in which its effectiveness is established, and third, it leads to a solution that meets the moral standard of the decision-maker.
It should be added that, in addition to those described in this article, there may be other conditions for the effectiveness of moral heuristics. Further research should clarify this issue. Moreover, this article describes only those conditions for the effectiveness of moral heuristics that are universal, that is, that apply to all heuristics. Each individual heuristic may also have its own specific conditions of effectiveness, which likewise require separate research.
It should be noted that the model proposed in this article involves the use of reasoning aimed at determining whether a heuristic meets the conditions of effectiveness. But does this not contradict the purpose for which people resort to heuristics – reducing the effort associated with moral decision making? In response to this objection, it can be said that this reduction of effort is acceptable only as long as it does not conflict with the main purpose – making the right decision. If one needs to resort to reflection in order to make the right decision, then this should be done. Heuristics have their drawbacks, and to compensate for them it is sometimes worthwhile to resort to deliberate procedures, even if this increases cognitive effort.
At the same time, applying the advice given in this article can be simplified by using it as a checklist algorithm for making heuristic decisions. This algorithm assumes that in a difficult situation, when a person does not know whether a certain heuristic will be effective, he or she may ask several questions: is this heuristic relevant to the problem, to the environment, and to my moral standard? Perhaps the use of such an algorithm will initially involve significant cognitive effort. However, it can be assumed that with experience the amount of effort will decrease and the efficiency of using the algorithm will increase. Of course, one should not expect that the heuristic selection process based on the proposed algorithm will become fully automatic. However, there is every reason to expect that with practice this process will become more automatic, rapid and less effortful.
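To make the checklist concrete, the following is a minimal Python sketch of the three questions combined into a single check. All names in it (Situation, Heuristic, the membership-test fields and the example values) are hypothetical illustrations introduced only for this sketch; in practice, each check is a reflective question answered by the decision-maker, not a computation that can be automated.

```python
from dataclasses import dataclass


@dataclass
class Situation:
    problem: str          # the problem the decision-maker faces
    environment: str      # the social and cultural context of the decision
    moral_standard: str   # the decision-maker's own normative standard


@dataclass
class Heuristic:
    name: str
    proven_problems: set       # problem types for which it has proven effective
    proven_environments: set   # environments in which its effectiveness is established
    compatible_standards: set  # moral standards its typical outcomes satisfy


def checklist(heuristic: Heuristic, situation: Situation) -> bool:
    """Return True if relying on the heuristic seems reasonable,
    False if deliberate (System 2) reasoning is advisable instead."""
    relevant_to_problem = situation.problem in heuristic.proven_problems
    fits_environment = situation.environment in heuristic.proven_environments
    meets_standard = situation.moral_standard in heuristic.compatible_standards
    return relevant_to_problem and fits_environment and meets_standard


# Hypothetical example: the imitate-your-peers heuristic, proven among police
# colleagues, applied within a church community (a different environment).
imitate_peers = Heuristic(
    name="imitate-your-peers",
    proven_problems={"everyday workplace cooperation"},
    proven_environments={"police work"},
    compatible_standards={"professional ethics"},
)
church_case = Situation(
    problem="everyday workplace cooperation",
    environment="church community",
    moral_standard="professional ethics",
)
if not checklist(imitate_peers, church_case):
    print("Checklist failed: do not rely on the heuristic; deliberate instead.")
```

In this sketch, a failed check does not settle the moral question; it merely signals that the heuristic should not be relied on automatically and that deliberate reasoning is advisable, which matches the role assigned to System 2 below.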
Finally, it should be added that the use of such an algorithm is not advisable in all situations, but only in those where a person doubts the effectiveness of a heuristic or hesitates over which one to choose. In this case, one has to 'turn on' System 2 (Kahneman 2011) to make the right decision. Using the proposed algorithm will facilitate the reasoning process and allow one to make better decisions.
I would like to thank Andrea Weyneth for proofreading this paper. I would also like to express my gratitude to the editor and reviewers for their valuable comments and suggestions on my manuscript.
Received 4 November 2021
Accepted 23 December 2021
1. Amos, C.; Zhang, L.; Read, D. 2019. ‘Hardworking as a heuristic for moral character: Why we attribute moral values to those who work hard and its implications’, Journal of Business Ethics 158(4): 1047–1062.
2. Anderson, E. 2005. ‘Moral Heuristics: Rigid Rules or Flexible Inputs in Moral Deliberation?’, Behavioral and Brain Sciences 28(4): 544–545.
3. Baron, J. 1993. ‘Heuristics and Biases in Equity Judgments: A Utilitarian Approach’, in Psychological Perspectives on Justice: Theory and Applications, eds. B. A. Mellers and J. Baron. New York: Cambridge University Press, 109–137.
4. Baron, J. 2012. ‘The Point of Normative Models in Judgment and Decision Making’, Frontiers in Psychology 3: 577. Available at: https://doi.org/10.3389/fpsyg.2012.00577 (accessed 03.11.2021).
5. Boyd, R.; Richerson, P. J. 2005. The Origin and Evolution of Cultures. New York: Oxford University Press.
6. Cosmides, L.; Tooby, J. 2006. ‘Evolutionary Psychology, Moral Heuristics, and the Law’, in Heuristics and the Law (Dahlem Workshop Reports), eds. G. Gigerenzer and C. Engel. Cambridge, MA, US: MIT Press; Berlin, Germany: Dahlem University Press, 175–205.
7. Fischer, R. W. 2016. ‘Disgust as Heuristic’, Ethical Theory and Moral Practice 19(3): 679–693.
8. Fleischhut, N.; Gigerenzer, G. 2013. ‘Can Simple Heuristics Explain Moral Inconsistencies?’, in Evolution and Cognition Series. Simple Heuristics in a Social World, eds. R. Hertwig, U. Hoffrage and ABC Research Group. New York: Oxford University Press, 459–485.
9. Friedland, J.; Emich, K.; Cole, B. M. 2020. ‘Uncovering the Moral Heuristics of Altruism: A Philosophical Scale’, PloS ONE 15(3): e0229124.
10. Gesang, B. 2021. ‘Utilitarianism and Heuristics’, The Journal of Value Inquiry 55: 705–723.
11. Gigerenzer, G. 2008a. ‘Moral Intuition = Fast and Frugal Heuristics?’, in Moral Psychology, Volume 2: The Cognitive Science of Morality: Intuition and Diversity, ed. W. Sinnott-Armstrong. Cambridge: MIT Press, 1–26.
12. Gigerenzer, G. 2008b. ‘Reply to Comments’, in Moral Psychology, Volume 2: The Cognitive Science of Morality: Intuition and Diversity, ed. W. Sinnott-Armstrong. Cambridge: MIT Press, 41–46.
13. Gigerenzer, G. 2010. ‘Moral Satisficing: Rethinking Moral Behavior as Bounded Rationality’, Topics in Cognitive Science 2(3): 528–554.
14. Gigerenzer, G.; Gaissmaier, W. 2011. ‘Heuristic Decision Making’, Annual Review of Psychology 62: 451–482.
15. Hartmann, D. J.; McLaughlin, O. 2018. ‘Heuristic Patterns of Ethical Decision Making’, Journal of Empirical Research on Human Research Ethics 13(5): 561–572.
16. Kahneman, D. 2011. Thinking, Fast and Slow. New York: Farrar, Straus & Giroux.
17. Lindström, B.; Jangard, S.; Selbing, I.; Olsson, A. 2018. ‘The Role of a “Common is Moral” Heuristic in the Stability and Change of Moral Norms’, Journal of Experimental Psychology: General 147(2): 228–242.
18. Messick, D. M. 1993. ‘Equality as a Decision Heuristic’, in Psychological Perspectives on Justice: Theory and Applications, eds. B. A. Mellers and J. Baron. New York: Cambridge University Press, 11–31.
19. Nadurak, V. 2018. ‘Two Types of Heuristics in Moral Decision Making’, Filosofija. Sociologija 29(3): 141–149.
20. Nadurak, V. 2020. ‘Why Moral Heuristics Can Lead to Mistaken Moral Judgments’, Kriterion – Journal of Philosophy 34(1): 99–113.
21. Pizarro, D. A.; Uhlmann, E. L. 2005. ‘Do Normative Standards Advance Our Understanding of Moral Judgment?’, Behavioral and Brain Sciences 28(4): 558–559.
22. Shah, A. K.; Oppenheimer, D. M. 2008. ‘Heuristics Made Easy: An Effort-reduction Framework’, Psychological Bulletin 134(2): 207–222.
23. Sinnott-Armstrong, W.; Young, L.; Cushman, F. 2010. ‘Moral Intuitions’, in The Moral Psychology Handbook, eds. J. M. Doris and The Moral Psychology Research Group. New York: Oxford University Press, 246–272.
24. Sunstein, C. R. 2005a. ‘Moral Heuristics’, Behavioral and Brain Sciences 28(4): 531–542.
25. Sunstein, C. R. 2005b. ‘On Moral Intuitions and Moral Heuristics: A Response’, Behavioral and Brain Sciences 28(4): 565–570.
26. Sunstein, C. R. 2008. ‘Fast, Frugal, and (Sometimes) Wrong’, in Moral Psychology, Volume 2: The Cognitive Science of Morality: Intuition and Diversity, ed. W. Sinnott-Armstrong. Cambridge: MIT Press, 27–30.
27. Sunstein, C. R. 2010. ‘Moral Heuristics and Risk’, in Emotions and Risky Technologies, ed. S. Roeser. Dordrecht: Springer, 3–16.
Summary
Moral heuristics are methods aimed at reducing the effort associated with moral decision making. The purpose of this article is to create a prescriptive model of the use of moral heuristics. It is proposed that the effectiveness of a heuristic depends on the problem we are solving by using the heuristic, the environment in which it is used, and the moral standard of the decision-maker. Thus, for the effective use of heuristics it is necessary, first, that the heuristic be relevant to the problem situation; second, that the heuristic be used in the environment in which it is effective; third, that the choice of heuristic be determined by the moral standard of the decision-maker.
Keywords: moral heuristics, moral decision making, prescriptive model, moral standard, moral judgment