A lesson from AI: Ethics is not an imitation game
Publication date
2022
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Génova, G., Pelayo, V. M., & Martín, M. R. G. (2022). A Lesson from AI: Ethics Is Not an Imitation Game. IEEE Technology and Society Magazine, 41(1), 75-81. https://doi.org/10.1109/MTS.2022.3147531
Abstract
How can we teach moral standards of behavior to a machine? One of the most common warnings about AI is the need to avoid bias in ethically loaded decision-making, even when the population from which the system learns is itself biased. This is especially relevant because equity (including the protection of minorities) is an ethical notion that by itself goes beyond the (most probably) biased opinions of people: equity must be pursued and ensured by social structures, regardless of whether people agree. We know (or believe?) that bias, or being biased, is a bad thing, regardless of what the majority says. In other words, good and evil are not what the majority says; they lie beyond majorities and mathematical formulae. Ethics cannot be based on a majority opinion about right and wrong, nor on a rigid code of conduct. We need to overcome the generalized skepticism in our society about the rationality of ethics and values. The good news is that AI is forcing us to think about ethics in a new way. The attempt to formalize ethics as a set of rules misses the point that a person is not merely an instance of a case, but a unique and unrepeatable being. Ethics should keep us from the error of reducing equity to mathematical equality, achieved through the extraction of characteristics and the computation of a value formula. Equity is not mathematical equality, not even a weighted equality that takes different factors into account.
The 2014 movie The Imitation Game tells the story of Alan Turing's life, especially his outstanding role in deciphering the German messages encrypted with the Enigma machine at Bletchley Park [1], [2]. The expression "the imitation game" comes from Turing himself: these are the first words of his 1950 article, "Computing Machinery and Intelligence" [3]. It is also the name of a game played by the Victorian aristocracy, which consisted in a blind exchange of handwritten messages in an attempt to guess whether the interlocutor was a woman or a man.
Description
This work was supported in part by the RESTART Project—“Continuous Reverse Engineering for Software Product Lines/Ingeniería Inversa Continua para Líneas de Productos de Software” under Grant RTI2018-099915-B-I00 and the Convocatoria Proyectos de I+D Retos Investigación del Programa Estatal de I+D+i Orientada a los Retos de la Sociedad 2018 under Grant 412122; in part by the ECSEL18 Project “NewControl” (Project 6221/31/2018) and its National PCI under Grant 449990; and in part by the CritiRed Project—“Elaboración de un modelo predictivo para el desarrollo del pensamiento crítico en el uso de las redes sociales,” Convocatoria Retos de Investigación del Ministerio de Ciencia, Innovación y Universidades (2019–2022), under Grant RTI2018-095740-B-I00.
Bibliographic references:
• F.H. Hinsley, A. Stripp (eds.), Codebreakers: The inside story of Bletchley Park, Oxford: Oxford University Press, 1993.
• A. Hodges, Alan Turing: The enigma, London: Burnett Books, 1983.
• A.M. Turing, “Computing Machinery and Intelligence”. Mind 59:433-460, 1950.
• D. Vanderelst, A. Winfield, “An architecture for ethical robots inspired by the simulation theory of cognition”. Cognitive Systems Research 48:56–66, 2018.
• S.L. Anderson, “The unacceptability of Asimov’s three laws of robotics as a basis for machine ethics”. In M. Anderson & S. L. Anderson (eds.), Machine ethics, pp. 285–296. Cambridge: Cambridge University Press, 2011.
• V. Nallur, “Landscape of Machine Implemented Ethics”. Science and Engineering Ethics 26(5):2381–2399, 2020.
• S. Lumbreras, “The Limits of Machine Ethics”. Religions 8:100, 2017.
• J. Torresen, “A Review of Future and Ethical Perspectives of Robotics and AI”, Frontiers in Robotics and AI 4:75, 2018.
• E. Awad, S. Dsouza, R. Kim, J. Schulz, J. Henrich, A. Shariff, J.-F. Bonnefon, I. Rahwan, “The Moral Machine experiment”. Nature 563:59–64, 2018. See also the online platform for the moral machine experiment, available: https://www.moralmachine.net/.
• C.M. Bishop, Pattern Recognition and Machine Learning, New York: Springer, 2006.
• J. Ober, “Democracy’s Wisdom: An Aristotelian Middle Way for Collective Judgment”. American Political Science Review 107(1):104-122, 2013.
• C. Leonard, “Teaching ethics to machines”, 2016. [Online]. Available: https://www.linkedin.com/pulse/teaching-ethicsmachines-charles-leonard.
• B. Gert, J. Gert, “The Definition of Morality”, The Stanford Encyclopedia of Philosophy. [Online]. Available: https://plato.stanford.edu/archives/fall2020/entries/moralitydefinition.
• A.M. Nascimento, L.F. Vismari, A.C.M. Queiroz, P.S. Cugnasca, J.B. Camargo Jr., J.R. de Almeida Jr., “The Moral Machine: Is It Moral?”. 2nd International Workshop on Artificial Intelligence Safety Engineering (WAISE 2019), within 38th International Conference on Computer Safety, Reliability, and Security (SAFECOMP), September 10-13, 2019, Turku, Finland. Lecture Notes in Computer Science, vol. 11699, pp. 405-410.
• A.E. Jaques, “Why the moral machine is a monster”. We Robot Conference, University of Miami School of Law, April 11-13, 2019.
• H. Etienne, “When AI Ethics Goes Astray: A Case Study of Autonomous Vehicles”. Social Science Computer Review 40(1):1-11, 2020.
• A. Puri, “Moral Imitation: Can an Algorithm Really Be Ethical?” Rutgers Law Record 48:47-58, 2020.
• P. Bremner, L.A. Dennis, M. Fisher, A.F. Winfield, “On Proactive, Transparent, and Verifiable Ethical Reasoning for Robots”. Proceedings of the IEEE 107(3):541-561, Special Issue on Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems, 2019.
• G. Génova, M.R. González Martín, “Teaching Ethics to Engineers: A Socratic Experience”. Science and Engineering Ethics 22(2):567-580, 2016.
• K. Bogosian, “Implementation of moral uncertainty in intelligent machines”. Minds and Machines 27(4):591-608, 2017.
• J. May, C. Workman, J. Haas, H. Han, “The Neuroscience of Moral Judgment: Empirical and Philosophical Developments”. In F. de Brigard, W. Sinnott-Armstrong (eds.), Neuroscience and Philosophy. MIT Press (forthcoming).
• S. Sismondo, “Post-truth?”. Social Studies of Science 47(1):3-6, 2017.
• D.G. Johnson, Computer Ethics, 2nd Ed, Upper Saddle River, NJ: Prentice Hall, 1994.
• K.C. Laudon, “Ethical Concepts and Information Technology”. Communications of the ACM 38(12):33–39, 1995.
• G. Génova, M.R. González Martín, A. Fraga, “Ethical education in software engineering: responsibility in the production of complex systems”. Science and Engineering Ethics 13(4):505-522, 2007.