
The AI research community is moving towards Human-Centred Artificial Intelligence (HCAI), which focuses on enhancing human performance and on making systems reliable, safe, and trustworthy. Professor Ben Shneiderman, from the University of Maryland, makes an exciting shift from previous thinking with 15 recommendations for bridging the gap between the ethical principles of HCAI and the practical steps needed for its effective governance. By taking a human-centred approach, he is opening doors to more reliable applications, enabling designers to translate ethical principles into professional practices, managers to create safety cultures in their companies, and government agency staff to establish effective policies.

Artificial Intelligence (AI) has applications in many domains, including healthcare, education, cybersecurity, and environmental protection, and it carries high expectations of associated benefits. Despite some resistance to change, the AI research community is moving towards Human-Centred Artificial Intelligence (HCAI) to calm fears of out-of-control robots, clarify responsibility for failures, and reduce the biased decision making that leads to unfair treatment of minority groups. HCAI will also help to reduce privacy violations, adversarial attacks, and misinformation.

Dr Ben Shneiderman, Emeritus Distinguished University Professor and Founding Director of the Human-Computer Interaction Laboratory at the University of Maryland, explains how “Human-Centred Artificial Intelligence (HCAI) systems represent a new synthesis that raises the importance of human performance and human experience”. Professor Shneiderman’s research bridges the gap between the ethical principles of HCAI and the practical steps that can be taken for its effective governance. His work marks an exciting shift from previous thinking: by taking a human-centred approach, it opens doors to more reliable and safe applications. These fresh ideas have been very well received by both AI practitioners and researchers. The original research article has generated huge interest for the journal ACM Transactions on Interactive Intelligent Systems, with over 2,500 downloads in the five months since its publication in October 2020.

HCAI versus AI

Traditionally, AI science research centred on emulating human behaviour, with AI engineering focused on replacing human performance. Typical applications included pattern recognition, language processing and translation, speech and image generation, and playing games, such as chess. In contrast, HCAI concentrates on enhancing human performance, making systems reliable, safe, and trustworthy; as well as supporting human self-efficacy, encouraging creativity and enabling social participation.

Machine-centred to human-centred

Previously, researchers and developers directed their attention towards developing AI algorithms and systems, with the emphasis on the machines’ autonomy. Conversely, HCAI focuses on user experience design, putting human users at the hub of design thinking. Researchers and developers of HCAI systems measure their success with human performance and satisfaction metrics. They attend to consumer needs and safeguard meaningful human control.

“By taking a human-centred approach, this work is opening doors for more reliable and safe applications.”

Governance and ethics

Professor Shneiderman moves beyond previous thinking with his recently published recommendations for the creation of reliable, safe, and trustworthy HCAI systems. These 15 novel recommendations, underpinned by a human-centred approach, pave the way for designers to translate ethical principles into professional practices, particularly those working in large organisations.

Human-Centred Artificial Intelligence (HCAI) focuses on user experience design, putting human users at the hub of design thinking.

HCAI system complexity

These constructive changes pose substantial challenges to software engineers, managers, and policy makers. Professor Shneiderman describes how his 15 recommendations can be summarised within three levels of organisational structure. He also recognises two sources of HCAI system complexity that make implementing all 15 recommendations difficult. Firstly, while individual components of a system can be carefully tested, reviewed, and monitored, complete HCAI systems, such as self-driving cars, social media platforms, or electronic healthcare systems, require higher levels of independent oversight, together with reviews of any failures and near misses. Secondly, complete HCAI systems comprise many products and services, such as chips, software development tools, training data suppliers, web-based services, and equipment maintenance providers. Any of these can change, sometimes daily, raising questions of robustness to change and safety in new contexts.

Reliable systems based on sound software engineering practices

Software engineering teams form the first level of the HCAI governance structure. The application of sound technical practices within such teams clarifies human responsibility. These practices include audit trails that provide accurate records of who did what and when, together with histories of who conducted the design, coding, testing, and revisions. Like flight data recorders for robots and AI systems, these records enable analysis of failures and near misses to improve performance in realistic contexts.
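To make the idea concrete, here is a minimal sketch of what such an audit trail might look like in practice. The class and field names are hypothetical illustrations, not part of Professor Shneiderman’s recommendations; the point is simply that an append-only record of who did what, and when, gives reviewers the raw material for analysing failures and near misses.

```python
import json
import time
from pathlib import Path

# Hypothetical sketch of an append-only audit trail (a "flight data recorder").
# All names here are illustrative, not drawn from a real HCAI system.
class AuditTrail:
    def __init__(self, path: str = "audit_log.jsonl"):
        self.path = Path(path)

    def record(self, actor: str, action: str, details: dict) -> None:
        entry = {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "actor": actor,      # who
            "action": action,    # did what
            "details": details,  # context for later failure analysis
        }
        # Append-only writes preserve the full history for post-incident review.
        with self.path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

log = AuditTrail()
log.record("j.doe", "model_revision",
           {"model": "lane-keeping-v7", "change": "retrained on March data"})
```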

Software engineering workflows are needed to meet user requirements in data collection, cleaning, and labelling. These workflows also involve visualisation and data analytics to shed light on abnormal distributions, errors and missing data, clusters, gaps, and anomalies. User experience design encourages improved interfaces that help users understand how decisions are made and that provide opportunities for recourse should they wish to challenge a decision.
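A short, hypothetical sketch of such a data analytics step is shown below, using the pandas library. The synthetic dataset, column name, and three-standard-deviation outlier rule are illustrative assumptions; the aim is only to show how missing values and anomalies can be surfaced before training.

```python
import numpy as np
import pandas as pd

def data_quality_report(df: pd.DataFrame) -> pd.DataFrame:
    """Count missing values and crude outliers (beyond 3 standard deviations) per column."""
    numeric = df.select_dtypes("number")
    outliers = ((numeric - numeric.mean()).abs() > 3 * numeric.std()).sum()
    return pd.DataFrame({
        "missing": df.isna().sum(),
        "outliers_3sd": outliers.reindex(df.columns, fill_value=0),
    })

# Synthetic stand-in for a training dataset, with one injected anomaly and one gap.
rng = np.random.default_rng(0)
df = pd.DataFrame({"income": rng.normal(50_000, 8_000, 1_000)})
df.loc[5, "income"] = 2_000_000   # data-entry error the report should catch
df.loc[10, "income"] = np.nan     # missing value the report should catch
print(data_quality_report(df))
```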

Governance structures for Human-Centered AI.

Verification, validation and bias testing

New methods of verification and validation testing are required for the AI and machine learning algorithms embedded in HCAI systems, together with usability testing involving typical users and other stakeholders. The aim is to maximise the chances of the HCAI system doing what users expect, while minimising the chance of unexpected harmful outcomes. Civil aviation, with its long history of benchmark tests for product certification, provides a good model for newer products and services.
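The sketch below illustrates, in the spirit of such benchmark tests, how a fixed acceptance test might gate a release. The toy classifier, the benchmark cases, and the 95% threshold are all hypothetical placeholders; a real certification suite would be far larger and independently curated.

```python
# Hypothetical benchmark-style acceptance test (runnable with pytest).
# The "model" is a placeholder rule; real V&V would test the trained system.
def classify(message: str) -> str:
    return "spam" if "free money" in message.lower() else "ham"

BENCHMARK = [
    ("Free money!!! Click now", "spam"),
    ("Meeting moved to 3pm", "ham"),
    ("You won FREE MONEY", "spam"),
    ("Lunch tomorrow?", "ham"),
]

def test_benchmark_accuracy():
    correct = sum(classify(msg) == label for msg, label in BENCHMARK)
    accuracy = correct / len(BENCHMARK)
    # Release is blocked unless the model clears the fixed certification bar.
    assert accuracy >= 0.95, f"accuracy {accuracy:.2f} below certification threshold"
```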

Effective bias testing converts ethical principles and bias awareness into action: in-depth testing of machine learning training datasets verifies that the data is current and fair, to avoid bias in the treatment of minorities. In addition, explainable user interfaces are required so that people can understand the decisions that influence their lives, such as rejections of mortgage or job applications. Good explanations also enable users to understand how they need to change their behaviour, or whether they should challenge the decision. Explanations are a legal requirement in many countries; the European Union’s General Data Protection Regulation (GDPR), for example, includes a “right to explanation”.
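One simple form such dataset testing can take is a group-disparity check, sketched below. The toy approval data, column names, and 5% tolerance are assumptions for illustration only; demographic parity is just one of many fairness metrics that might be applied.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group: str, outcome: str) -> float:
    """Largest difference in positive-outcome rates between any two groups."""
    rates = df.groupby(group)[outcome].mean()
    return float(rates.max() - rates.min())

# Toy decision data; in practice this would be the labelled training set.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})
gap = demographic_parity_gap(decisions, "group", "approved")
print(f"approval-rate gap between groups: {gap:.2f}")
if gap > 0.05:  # illustrative tolerance
    print("disparity exceeds tolerance; review data collection and labelling")
```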

Safety culture through business management strategies

The safety culture within organisations comes from management strategies including leadership commitment to safety. Organisational leaders can make their commitment to safety clear with explicit statements about values, vision, and mission.

Including safety in job postings and position statements demonstrates this commitment to current and potential employees; diversity in hiring also shows commitment to safety. Safety cultures may require experienced safety professionals from a variety of fields, including health, human resources, organisational design, ethnography, and forensics.

This HCAI overview is the outline for Ben Shneiderman’s forthcoming book (2022) on this topic from Oxford University Press.

Failures and near misses

Many industries have established industry standards and professional associations that promote innovation, growth, safety, and alignment with industry-standard practices. Safety-orientated organisations regularly report on their failures and near misses; the latter in particular provide rich data to inform maintenance, training, or redesign. Internal review boards for problems and future plans demonstrate commitment to a safety culture, with, for example, regularly scheduled meetings to discuss failures and near misses, where resilient efforts in the face of serious challenges can also be celebrated.

“Putting people at the centre can shift thinking to build future societies of which we can all be proud.”

Trustworthy certification by independent oversight

The third governance layer involves the independent oversight of external review organisations providing trustworthiness certification through industry-wide efforts. These include government interventions and regulation; accounting firms that conduct external audits; insurance companies that compensate for failures; non-governmental and civil society organisations involved in the advancement of design principles; and professional organisations and research institutes that develop standards, policies, and new ideas. Professor Shneiderman explains that the “key to independent oversight is to support the legal, moral, and ethical principles of human or organisational responsibility and liability for their products and services”. Devotion to implementing ethical principles to make safe and effective products and services becomes a competitive advantage.


Future directions

The concerns are wide-ranging, so organisations that draw researchers and practitioners from diverse disciplines are more likely to succeed. These 15 recommendations for governance structures will face many challenges, and it is unlikely that any industry will be able to implement all of them at all three levels. Research and testing are required to validate each recommendation.

HCAI systems will be adapted over time to integrate new HCAI technologies, to serve the needs of different application domains, and to meet the changing desires, demands, and expectations of all stakeholders. The global interest in HCAI systems is demonstrated by the United Nations’ International Telecommunication Union and its 35 UN partner agencies, as they apply AI to improve healthcare, wellness, environmental protection, and human rights.

For those wishing to see HCAI applied for social good, the amount of interest in ethical, social, economic, human rights, social justice, and responsible design is a positive sign. Sceptics, however, fear that poor design will lead to failures, bias, and privacy violations, and will threaten security. While these concerns are legitimate, the concentrated efforts of well-intentioned researchers, business leaders, government policy makers, and civil society organisations point towards more positive outcomes. By taking a human-centred approach, this work is opening doors for more reliable and safe applications, with the aim of limiting the dangers and increasing the benefits of HCAI for individuals, organisations, and society. Professor Shneiderman recognises, however, that the changes he proposes “will take decades to be widely adopted but putting people at the centre can shift thinking to build future societies of which we can all be proud”.


What has been the most rewarding outcome of your research into HCAI?

Warm feedback makes me feel that these fresh ideas will be put to work. For example, an anonymous reviewer wrote: “As a practitioner/researcher, I really enjoyed… the very practical, layered approach he posits for the chance to engage system builders and designers of AI systems so that there are always humans in the loop, and both internal and external oversight processes to ensure safety and bias free benefits… very easy to digest summary of recommendations for ensuring safe and beneficial AI.”

 

References

  • Shneiderman, B. (2020). Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-centered AI Systems. ACM Transactions on Interactive Intelligent Systems, 10(4), Article 26, 31 pages. https://doi.org/10.1145/3419764
  • Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human-Computer Interaction, 36(6), 495–504. https://doi.org/10.1080/10447318.2020.1741118
  • Shneiderman, B. (2020). Human-Centered Artificial Intelligence: Three Fresh Ideas. AIS Transactions on Human-Computer Interaction, 12(3), 109–124. https://doi.org/10.17705/1thci.00131
DOI: 10.26904/RF-135-1215426699

Research Objectives

Professor Shneiderman’s research interests include human-computer interaction, user interface design, information visualisation, and social media.

Bio

Ben Shneiderman is Emeritus Distinguished University Professor of Computer Science and Founding Director (1983–2000) of the Human-Computer Interaction Laboratory at the University of Maryland. He is a Fellow of AAAS, ACM, IEEE, NAI, and the Visualization Academy, and a Member of the U.S. National Academy of Engineering, in recognition of his pioneering contributions to human-computer interaction and information visualisation.


Contact

E: [email protected]
W: http://www.cs.umd.edu/~ben

W: https://en.wikipedia.org/wiki/Ben_Shneiderman