
Machine ethics: do androids dream of being good people?

Full text at PDC

Publication date

2023

Publisher

Springer

Citation

Génova, G., Moreno, V. & González, M.R. Machine Ethics: Do Androids Dream of Being Good People?. Sci Eng Ethics 29, 10 (2023). https://doi.org/10.1007/s11948-023-00433-5

Abstract

Is ethics a computable function? Can machines learn ethics like humans do? If teaching consists in no more than programming, training, indoctrinating… and if ethics is merely following a code of conduct, then yes, we can teach ethics to algorithmic machines. But if ethics is not merely about following a code of conduct or about imitating the behavior of others, then an approach based on computing outcomes, and on the reduction of ethics to the compilation and application of a set of rules, either a priori or learned, misses the point. Our intention is not to solve the technical problem of machine ethics, but to learn something about human ethics, and its rationality, by reflecting on the ethics that can and should be implemented in machines. Any machine ethics implementation will have to face a number of fundamental or conceptual problems, which in the end refer to philosophical questions, such as: what is a human being (or more generally, what is a worthy being); what is human intentional acting; and how are intentional actions and their consequences morally evaluated. We are convinced that a proper understanding of ethical issues in AI can teach us something valuable about ourselves, and what it means to lead a free and responsible ethical life, that is, being good people beyond merely “following a moral code”. In the end we believe that rationality must be seen to involve more than just computing, and that value rationality is beyond numbers. Such an understanding is a required step to recovering a renewed rationality of ethics, one that is urgently needed in our highly technified society.

Description

This work has been supported by the Madrid Government (Comunidad de Madrid, Spain) under the terms of the Multi-Annual Agreement with UC3M in the line of Excellence of University Professors (EPUC3M17), and in the context of the V PRICIT (Regional Programme of Research and Technological Innovation). This research has also received funding from the RESTART project, “Continuous Reverse Engineering for Software Product Lines / Ingeniería Inversa Continua para Líneas de Productos de Software”.
