
Enabling End-User Agency and Trust in Artificial Intelligence Systems

IEEE AIS Trust and Agency Committee Explores User Agency and Trust in AI

Complex artificial intelligence systems (AIS) are increasingly finding their way into consumer contexts. Everyday users with no domain expertise are being introduced to these systems with varying degrees of transparency and understanding of the technology. Moreover, it is unclear to most how interaction with these systems may affect their lives in positive as well as negative ways. 

In an ideal world, we would reap the benefits of AIS while mitigating their risks. This is why the IEEE Trust and Agency in AIS Committee was formed, as part of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. As we navigate the transitional AIS innovation phase, understanding and critical discussion of the role and interplay of end-user trust and agency, alongside machine agency, will play a key role in ensuring that we use these complex systems effectively, sustainably, and safely.

Four Key Questions About Trust and Agency

Trust and agency in the context of technology are discussed in various fields, yet there seems to be little agreement on the definitions and methods used in trust research and practice, or on how to interpret trust versus reliance, not to mention individual differences between ‘trust seekers’ and ‘trust skeptics.’

As a starting point for achieving clarity and direction, we invited members of the IEEE Trust and Agency in AIS Committee to share their views on trust and agency from their own perspectives and work. Our committee members come from different parts of the world and have different professional backgrounds. We asked them four questions and collected their biographies so that we can understand their views in the context of their respective backgrounds. The full document will be released by the end of the year, but here is a short summary of what the authors have said so far:

Question 1: Why do you think it’s so challenging to define “trust” for AI today?

Since everybody is talking about trust these days, and “trust-washing” is now a thing, we wanted to delve a bit deeper into what trust is all about and why it is so hotly debated. In our committee meetings, we explored components of trust, discussing trust as an attitude, trust as goal-directed, and the role of vulnerability to betrayal. Often, these discussions then led to discussions about trustworthiness.

Trustworthiness is distinct from trust: it refers to attributes of the machine itself, comprising performance-based attributes (i.e., how good the product is), process characteristics (i.e., how understandable the system is to an operator), and benevolence, a purpose-based attribute referring to the intent of the designers, or why a system was built. As you will see, the answers of our experts are diverse, and each may contribute to a better overall understanding of these questions.

What about you? What comes to your mind when you think about trust? Trust in humans? Trust in machines? Do you have a perspective we have not seen in the answers below?

Let us know your thoughts by emailing marisa.tschopp@scip.ch.

Question 2: Why do you think it’s so challenging to define “agency” for AI today?

Researchers, practitioners, and ethicists debate this topic passionately; by contrast, you don’t typically get “flamed” when talking about trust in human-AI interaction. Maybe trust in machines does not exist, maybe it is misplaced, maybe it is all anthropomorphism; we don’t know. Yet trusting a machine means giving it agency and giving up some of our own human agency. When, and how, do we have “veto rights” when a machine is making decisions?

We theorize and hope that enabling user agency as a means of control could be one way to address this question. This is why our committee decided to focus on agency instead of restricting ourselves to the “hot topic” of trust. By reading the answers to this question, you will see an even broader interpretation of the notion of agency in the context of AI. Some talk more about technical machine agency, while others focus more on the agency of humans, and still others on the tension between the two. 

What is your initial thought? Do you associate “agency” with the power of machines or the power of humans? 

Let us know your thoughts by emailing marisa.tschopp@scip.ch.

Question 3: What are the biggest needs or challenges for businesses and policymakers around defining “trust” and “agency” for AI and why?

Most people will never understand how AIS work. However, some experts do, and one can ask them for help and advice. Furthermore, more and more laws now protect users from the negative aspects of AIS, such as legislation ensuring data privacy. While it is impractical, indeed impossible, to educate everyone on the details of AIS, we can strive to provide greater literacy in the basics and mechanics of AIS so as to foster informed use rather than blind trust.

There is consensus that humans “should” have the power to shape how they use technology. But this ideal is undermined when systems are purposely built to interfere with rational human decision-making, for instance through addictive or anthropomorphic design features or over-hyped marketing. Finding the right words, and the means to reach and communicate with one another, in order to bring people together will remain a huge challenge. However, we believe that by making end-user agency a standard for designing and developing products, we will give people the opportunity to develop skills that enhance their freedom and to use technology in the ways they truly desire.

Have you encountered challenges in your business or government that are similar to, or different from, the experiences of our members?

Let us know your thoughts by emailing marisa.tschopp@scip.ch.

Question 4: What’s the win here? Please provide a portrait of society in five years where “trust” and “agency” for AI are as safe, innovative, and beneficial as possible, starting now in 2020.

Admittedly, predictions are risky. But this section is more about vision than forecasting. The members of our committee agree that we are experiencing great change at an unprecedented pace. Uncertainty and skepticism are growing. Real-world use of the word trust is soaring in design guidelines, advertising, image campaigns, and the codes of ethics of tech firms, banks, and AI start-ups. But it is unclear where all these efforts are headed, or how we would know when we have achieved success.

The visions of our experts share a similar narrative: a beautiful world, with kind people and great tools that help other kind people. We encourage you to read their views, and to think about how their stories may affect you and your dreams, so that life, work, technology, and other relationships come together to make the world a better place.

Let us know your thoughts by emailing marisa.tschopp@scip.ch.

Get Engaged With Us

The IEEE Trust and Agency Committee aims to support the development of successful transdisciplinarity around AIS, taking into account the practicalities of technological development alongside sublime notions of human agency. We want to stimulate a constructive, critical, and respectful discussion around topics of trust and agency as a way of moving forward with emerging intelligent technologies. We want to continue doing so as long as it benefits humanity. 

If you are inspired, curious, or critical about our work, please contact us to join the community.


Authors:

  • Marisa Tschopp, Co-Chair, IEEE AIS Trust and Agency Committee
  • S. Shyam Sundar, Co-Chair, IEEE AIS Trust and Agency Committee

Guest Contributor

Beyond Standards features contributions from IEEE SA’s global network of volunteers, members, staff, and partners, serving as a trusted source of information, education, and inspiration for industry, government, academia, and the public.
