By Denis Larrivee and Luis Echarte
Introduction
Affective, often autonomous, computational and robotic artefacts
constitute a rapidly growing sector of artificial intelligence application [1].
Deployed in a diversity of socially interactive domains as companions,
health assistants, and elder and child caretakers, such systems are able to tap
into the affective system of humans; accordingly, they have the potential both
to support and to benefit from structured interactions [2].
Such artefacts raise specific anthropological and ethical issues related to
human flourishing, dignity, and autonomy and, by extension, to the social
structures flowing from human attachments. Many such systems now draw on
neuroscientific knowledge and are specifically designed to appeal to the underlying
neural correlates that evoke interactive responsivity. Among the neural correlates
yet to be considered are processes capable of inferring human
intentions, a human ability whose neural substrate is designated theory of
mind. Theory of mind is a generalized class of neural structures (e.g., mirror
neurons), domains, and operations that draw intentional inferences from the
actions of others [3], capacities that have facilitated evolutionary and
cultural progress in human social performance. Combined with affective
abilities, the duplication of these capacities in autonomous machines
purposed to enhance social interaction with humans has the pragmatic outcome
of broader anthropological assimilation. Such assimilation represents a
widening incursion of ontological equivalence that relativizes anthropological
meta-reality through technology. Accordingly, the evolution of AI/robotic
artefacts through a simulated appropriation of human social and computational
capacities constitutes a multi-dimensional influence, affecting not only
social, legal and ethical praxis but also how these influences are articulated
through evolving conceptions of technology's relation to anthropology. This
poster explores the role of the growing intersection of neuroscience and
technology in affective/intentional AI/robotic artefacts as a source for
articulating an evolving anthropology-technology relation through legal and
ethical praxis.
1. Is Technology Meant to Execute or to Share our Goals?
The use of technology has always had a close association with the
human being. Impressed with a telos that is externally imposed, technology is
necessarily bound to the human being by its capacity to extend human ability.
Technology’s proximity to human beings, however, was the subject of Heidegger’s
critique, in which humans were themselves made subject to the manipulation of the
technology they had created [4]. Heidegger’s concern is echoed in the recently
released document Ethically Aligned Design [1] on the prospects, but also the
dangers, posed by artificial intelligence technology. Accordingly, the document
extends the notion of progress to include not merely the enhancement of
computational or related abilities but also alignment with human moral values,
allowing an elevated level of trust to permeate the human-technology interaction.
‘To fully benefit from the potential of AI and Autonomous Systems, we need to… go
beyond… more computational power… make sure they are aligned to human ethical
principles… to elevate trust between Humans and technology…’ [1]
Hence, trust is a crucial ethical variable and flows only from
alignment with human values and well-being. However, the development of ‘trust’
in technology that will benefit human flourishing is likely to be challenged by
autonomous systems designed to elicit affective interaction.
2. Social Capacities: A Dual Outcome Challenge
The remarkable social capabilities of humans have evolved to enhance
group and cultural advance, features that require extended intervals of neural
development [5]. Increased knowledge in social neuroscience has propelled advances
in psychiatric care [6], but it has also yielded an abundance of knowledge on
neural features that could be appropriated for social interaction through
responsive AI device systems. Such devices, in fact, either have been implemented
or are in advanced stages of development [2]. Their development raises at least
two ethical metaconcerns. First, as Heidegger pointed out, they risk a misrepresentation of ontology; that is, they introduce risk through the perception of being human and through the influence this perception exerts on ethical and juridical structures. Second, they place ‘trust’ at risk, a risk flowing from the absence of value paradigms oriented to human well-being. For example, in the absence of such moral framing, such systems are open to autonomous and intentionally deceptive actions, a circumstance likely to be exacerbated as knowledge of social neuroscience grows (Table 1).
3. The Challenge of Theory of Mind.
Affective computational and robotic technologies capable of sensing, modeling,
or exhibiting affective behavior by means of emotions, moods, and personality are
especially likely to elicit trust on the part of human partners. Such capacities can be combined with abilities for inferring human intentions, like the capacity now studied in social neuroscience under the term theory of mind [3]. These capacities are instantiated by a unique group of cells (e.g., mirror neurons), circuits (e.g., fronto-parietal), and subdomains (e.g., the temporo-parietal junction) that are instrumental in their ability to facilitate group and social interaction. Their likely consideration, or even appropriation, in responsive AI systems design can be expected to further erode distinctions with human abilities and to exacerbate the sort of dual-challenge risks already introduced by intentionally affective devices.
4. Embedding AI Artefacts in Social Structure: Themes.
The embedding of AI artefacts in the social order is intentional and pervasive, engaging a confluence of social and engineering disciplines. Though marked by multiple motivations, a consensus posits that humanoid simulation, appearing in multiple guises (Table 2), will enable more rapid and adaptive appropriation. Campa, for example, identifies two key concepts that inform design efforts directed to mimicking human features: scenario, the narratives through which goals are achieved, and persona, the concrete actors in those narratives [7]. Campa’s identification of simulated properties, however, does not exhaust the themes employed for assimilation. Beyond simulation, efforts aimed at broadening AI repertoires seek to achieve cognitive and computational properties that more closely resemble those of humans, so as to facilitate conceptual exchange with artefacts in real-world, joint tasking [8].
5. Ethical Meta-Principles for Enlightened AI Implementation: Relating Ontology to Praxis
Heidegger’s critique of technology specifically addressed the lack of
transparency concerning technology’s impact on the human being [4,9].
Current attempts to frame the ethical issue (the Ethically Aligned Design document)
situate the lack of transparency in the context of manipulative self-interest
and conflicts of interest. While these are serious issues, Heidegger’s reading also
concerns the subtler issue wherein the order of being fails to be conciliated with
the reality of the technology [4]. This ontological reconfiguration elicits a
human affective investment not coincident with the nature of the artefact.
6. Current Efforts to Expand Value Paradigms in RoboEthics.
The sophistication and complexity of AI/robotic devices, and their
potential for acquiring more advanced functionalities, led to the early
recognition not only of their benefit in extending human capability but also of
their capacity for inflicting harm. Accordingly, initial ethical statements were framed in terms
of meta-principles that clearly asserted human control and safety, e.g., the
frequently cited Asimov Laws (Table 3). Social assimilation and joint tasking
roles, however, have been the stimulus for consensus-based approaches that
capitalize on stakeholder contribution [11]. Rather than presuppose a human normative standard, value is defined in terms of utility and a median position among competing interests. Improved autonomous capabilities, moreover, have stimulated value derivation paradigms premised on functionalist notions of property parity rather than ontological distinction. For example, notions of whether robot labor constitutes servitude emerge from value models that equate cognitive performance with ontological parity [12].
7. Human Rights? Or New Legal Philosophies.
The development of devices with capabilities for human interaction at
the scale of affectivity and intention suggests that fundamentally new
relations between technology and humans will be structured, with new
susceptibilities to ontological misrepresentation and affective exploitation.
Which legal philosophies best analogize these new circumstances and how can
they best be used to enhance human flourishing? What interactions ought to be
governed by statutory provision? In recognition of their enhanced autonomous
capability, the European Union’s RoboLaw Project identified the notion of
accountability gaps emerging in liability definitions [13] (Table 4). For
affective artefacts, liability concerns mark a special sphere of interactive
scenarios relating to disclosure, as well as to emotive association and the notion
of harm type. However, as an extension of ethical value, legal philosophy and
praxis can also be expected to adjust to shifting value paradigms, accommodating
a redistribution of value investment. Consensus ethics and the ontological
functionalizing of anthropology have been taken up in new metaphorical notions
that have shaped legal praxis [14]. Moreover, human rights can be expected to
undergo a similar redistribution, invested in artefacts in ways that parallel the
egalitarian actor-network philosophies that have emerged from eco-ethics models
[15].
References
[1] Ethically Aligned Design (2016) A Vision for Prioritizing Human
Wellbeing with Artificial Intelligence and Autonomous Systems. IEEE Global
Initiative http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf.
[2] Šabanović S (2014) Inventing Japan’s robotics culture: the
repeated assembly of science, technology, and culture in social robotics. Social
Studies of Science 44(3):342-367.
[3] Rizzolatti G (2010) Mirror neurons: from discovery to autism. Exp
Brain Res 200(3):223-237.
[4] Onishi B (2010) Information, bodies, and Heidegger: tracing
visions of the posthuman. Sophia 50:101-112.
[5] Decety J, Cowell JM (2014) Friends or foes: is empathy necessary
for moral behavior? Perspect Psychol Sci 9(5):525-537.
[6] Cacioppo JT, Cacioppo S, Dulawa S, Palmer AA (2014) Social
neuroscience and its potential contribution to psychiatry. World Psychiatry
13:131-139.
[7] Campa R (2016) The rise of social robots: a review of the recent
literature. J Evolution Tech 26(1):106-113.
[8] Lin P, Abney K, Bekey G (2011) Robot ethics: mapping the issues for a
mechanized world. Artificial Intelligence 175:942-949.
[9] Rae G (2014) Being and technology: Heidegger on the overcoming of
metaphysics. J British Soc Phenom 43(3):305-325.
[10] Rae G (2014) Heidegger’s influence on posthumanism: the
destruction of metaphysics, technology, and the overcoming of anthropocentrism.
His Human Sci 27(1):51-69.
[11] Stahl BC, Coeckelbergh M (2016) Ethics of healthcare robotics:
towards responsible research and innovation. Robotics and Autonomous Systems 86:152-161.
[12] Petersen S (2007) The ethics of robot servitude. J Exper Theor
Artificial Intel 19(1):43-54.
[13] The RoboLaw Project (2014) Regulating emerging robotic technologies in
Europe: robotics facing law and ethics. www.robolaw.eu
[14] Calo R (2016) Robots as legal metaphors. Harvard J Law Tech
30(1):209-237.
[15] Chandler D (2013) The world of attachment: the post-humanist challenge
to freedom and necessity. Millennium J Inter Studies 41(3):516-534.
Many interesting thoughts here. With regard to the following paragraph there are some questions:
ReplyDelete"Heidegger’s critique of technology specifically addressed the lack of transparency concerned with technology’s impact on the human being [4,9]. Current attempts to frame the ethical issue (Ethically Aligned Design Document) situate the lack of transparency in the context of manipulative self interest and conflicts of interest. While serious issues, Heidegger’s reading also concerns the subtle issue wherein the order of being fails to conciliate with the reality of the technology [4]."
It's a bit unclear whether Heidegger _specifically addressed the notion of transparency_, and what such a very contemporary notion (transparency) might mean in the context of his thought. In so far as Heidegger's texts are enlisted to support such a point, it would be interesting to have references to "chapter and verse" here. The sentence-opener "While serious issues" is ambiguous and possibly a typo.
A third, and more comprehensive, question relates to the _use_ of terms such as "manipulative self-interest" to describe Heidegger's critique. If the authors here are working with Heidegger's notion of Gestell and instrumental reason, it isn't clear in what sense Heidegger held such thought to be morally deficient. Again a precise reference would be good.
Take care. -tor
Thank you for your interest. I hope the following addresses your comments.
(1) The statement is derivative and taken from Brad Onishi's article (ref. 4).
(2) The opener is indeed ambiguous and is meant to reflect ethical questions now raised about the evolution of such devices, not to bridge with Heidegger's notion.
(3) The term 'manipulative self-interest' is a contemporary reading (Ethically Aligned Design document) on the possibilities of device '(mis)performance' and relates to the 'transparency' of technology application or reliability; i.e., in the context of (mis)placing trust/confidence in the technology's performance/aims.
Thank you. It *does* clarify matters somewhat: you enlist Heidegger specifically to ask questions regarding the relation of being to technology, and *not* so much in respect to what we can loosely call "instrumental reason" (except, of course, where these concerns overlap).
It is as if Heidegger had himself been made aware, ahead of time, of our current usage in operating digital machinery -- we do not store, but save, files -- when he noted in his last interview that "only a god can save us." Take care. -tor
Yes. Thank you - Denis.