INTEGRATING MORALITY INTO INTELLIGENT MACHINES – CAN ARTIFICIAL INTELLIGENCE MAKE UNSUPERVISED MORAL DECISIONS?
Abstract
With the expansion of artificial intelligence and other advanced technologies, the
world of the 21st century is changing rapidly and imposing new dynamics of living. Although
these changes affect all age groups, younger generations adopt them faster and
respond to them more positively.
react more positively. The new cohorts - Generation Z and Alpha - live in a digital world
that affect their lifestyle, interpersonal relations, quality of mental health, psychological
well-being and everyday challenges. The presence of the so called “Frankenstein effect”
in some adults provoked by the fast development of artificial intelligence and robotics,
reflects a “humans versus machines” position, viewing artificial intelligence as a
threat to humanity. However, the reality is that digital and human world are not in
conflict, since many people are already using artificial intelligence tools on daily basis.
Artificial intelligence is already applied in areas of medicine, education, business, law, agriculture,
industry, space technology, and many other fields. With this in mind, the question of
morality emerges as a crucial one. A frequently asked question is: does artificial
intelligence have the capacity to make moral decisions independently? Therefore, integrating
morality into AI algorithms is one of the priorities that interdisciplinary teams
from engineering and computer sciences, psychology, philosophy, sociology, law, etc.
are working on intensively. This paper addresses this issue by presenting findings from
relevant research on the challenges of, and possibilities for, integrating the
dimensions of morality and human values into autonomous systems that
perform complex tasks.
Copyright (c) 2024 Ana Frichand
This work is licensed under a Creative Commons Attribution 4.0 International License.