
Developing regulatory compliance for artificial intelligence

  • Numerous principles and frameworks for regulating artificial intelligence (AI) and Trustworthy AI have been developed at the international level.
  • However, Professor Hannah Yee-Fen Lim at Nanyang Technological University, Singapore, has found that many of these are too vague or lack the necessary understanding of AI.
  • Regulators are challenged with encouraging innovation while still protecting people and society from harm.
  • While some work has focused on developing ethics principles, it is time to move on to creating solid legal frameworks and regulations.
  • To deploy ethical and trustworthy AI, regulators will have to seek assistance from those trained in both computer science and law.

Every day, numerous artificial intelligence (AI) algorithms make crucial decisions with minimal human oversight in areas such as autonomous vehicles, medical systems, and trading systems. Since they can be self-programming, intelligent algorithms’ behaviour can evolve in unforeseen ways.

Some organisations are becoming increasingly concerned that their algorithms could cause reputational or financial damage and ‘algorithm audits’ are being introduced in response to regulations and legislation. AI technology is emerging as a significant law specialisation, and it is becoming ever more important for lawyers to fully understand how it works.
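To give a concrete sense of what an ‘algorithm audit’ might involve, here is a minimal, illustrative sketch of one possible check: comparing a model’s approval rates across demographic groups (a simple ‘demographic parity’ test). The data, group names, and threshold are hypothetical, invented for illustration; they are not drawn from any specific regulation or from Lim’s chapter.

```python
# Illustrative sketch of one check an 'algorithm audit' might run:
# comparing a model's approval rates across demographic groups.
# All names, numbers, and the flagging threshold are hypothetical.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample: group label and the model's decision.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

gap = parity_gap(decisions)   # 0.75 - 0.25 = 0.5 for this sample
flagged = gap > 0.2           # an audit might flag gaps above a set threshold
```

Real audits are far broader, covering data provenance, robustness, and documentation, but even this toy check shows why auditors need both statistical and legal framing for what counts as an unacceptable disparity.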

Professor Hannah Yee-Fen Lim at Nanyang Technological University, Singapore, has contributed a chapter on the regulatory compliance of AI, the first published on the topic in the English-speaking world, in the recently published book titled ‘Artificial Intelligence: Law and Regulation.’ In the book, she analyses the numerous principles and frameworks for regulating AI and Trustworthy AI that have been constructed at the international level, and finds that many of them are too vague or lack the necessary practical understanding of AI and how machine learning works. These deficiencies are observed in regulations formulated by organisations ranging from the European Union to the Organisation for Economic Co-operation and Development (OECD). Lim analyses and critiques these and other international efforts. She argues that, besides being vague, these frameworks and principles do not deliver a straightforward description of what is expected from AI players and developers. Moreover, they do not help governments ensure regulatory compliance.

The nature of AI

Lim begins by examining the nature of AI, establishing what AI is in a technical sense, and why it needs to be regulated. Considering the technology, she contrasts traditional hard-coded software with AI algorithms and machine learning – where computers can execute functions even though they haven’t been explicitly programmed to do so. AI algorithms do not learn in the same way as humans. Instead, they are trained using vast datasets. Even if the algorithm is mathematically sound, the size and quality of the training data can influence how it performs and whether it meets the required standards. Machine learning can take the form of unsupervised, supervised, or reinforcement learning, each with its own drawbacks.
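The contrast Lim draws between hard-coded software and trained models can be sketched in miniature. Below, a fixed rule written by a programmer sits next to a toy supervised learner whose decision boundary comes entirely from labelled examples; the task and numbers are invented for illustration only.

```python
# Minimal illustration of hard-coded software versus supervised learning.
# The dataset and task are hypothetical.

def hard_coded_rule(x):
    # Traditional software: the decision boundary is fixed by a programmer.
    return x > 5.0

def train_threshold(examples):
    """Supervised learning in miniature: choose the threshold that best
    separates labelled examples (x, label). The learned behaviour depends
    entirely on the training data supplied."""
    candidates = sorted(x for x, _ in examples)
    best_t, best_correct = 0.0, -1
    for t in candidates:
        correct = sum((x > t) == label for x, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# A different (or poorer-quality) dataset would yield a different rule,
# echoing the point that training data shapes how a model performs.
data = [(1, False), (2, False), (6, True), (7, True)]
threshold = train_threshold(data)
```

The learner here perfectly separates this tiny dataset, but feed it skewed or noisy data and the learned threshold shifts – which is exactly why the chapter stresses that data size and quality, not just mathematical soundness, determine whether a system meets required standards.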

“Even if the algorithm is mathematically sound, the size and quality of the training data can influence how it performs.”

Regulators are challenged with encouraging innovation while protecting people from harm. For instance, autonomous vehicles can be deadly, both for passengers and the public. Other AI systems may cause financial or physical harm to individuals.

International AI ethics

Lim’s research examines the ethics codes and principles that governmental and intergovernmental organisations currently have in place. After reviewing the ethics principles and frameworks in Australia, the United States, the European Union, and the OECD, Lim observes that these principles are voluntary and vague, so they aren’t helpful either for developers or for those who deploy the systems and want to ensure that their AI systems are compliant.

While some early works focused on developing ethics principles, it is time to move on to creating solid legal frameworks and regulations. Around the world, governments are grappling with how to create AI laws and regulations. Many take an incremental approach with broad-stroke policies that aim for ‘damage control’ and minimise the fallout from using AI. They include concepts such as transparency, but then fail to explain what such transparency would involve.

“Some organisations are becoming increasingly concerned that their algorithms could cause reputational or financial damage.”

AI’s impact on general laws

The European Union has started to systematically review its substantive laws to determine which areas should be updated due to the use of AI. These areas include criminal law and consumer protection laws, where people can fall victim to illegal, deceptive practices resulting from AI applications. Data protection law is also affected due to AI’s reliance on data and big data. The United Kingdom Information Commissioner’s Office has issued practical guidance on how organisations’ use of AI can comply with data protection laws. In the area of civil liability laws, however, without a thorough understanding of AI technology, legislators and regulators have adopted poorly thought-out ways to apportion liability.

“Ethics principles are voluntary and vague, so they aren’t very helpful for developers wanting to ensure that their AI systems are compliant.”

Some industries have always been highly regulated for reasons such as safety, risk, and the protection of interests. These include banking and finance, medical and healthcare, and the transportation industry. The World Health Organization (WHO) has released the ‘living’ WHO Guidance on Ethics & Governance of AI for Health, for which Lim was an appointed External Expert Reviewer. It is hoped that this will lead the way for AI in the medical sector. Being a ‘living’ document, it will continue to evolve. In the transport industry, however, there is still no international consensus on the trialling and regulation of autonomous vehicles. Nor are there international standards for the AI technology used in autonomous vehicles.

Regulatory compliance using AI

The chapter concludes with a description of how AI can assist in complying with regulations. Legislators and regulators can’t assume they understand AI technology, as descriptions of the technology are often interpreted in different ways by those trained in different disciplines. To deploy ethical and trustworthy AI, they will have to seek assistance from those trained in both computer science and law.

The development of this emerging area of regulatory compliance is likely to continue for some time. A substantial body of literature has been created over the past five years, but substantive legal rules are only starting to take shape now. Moving forward, rules and regulations will likely be created for individual industries, particularly the high-risk industries. These can then be generalised to cover other use cases of AI.

Autonomous vehicles use artificial intelligence to make real-time decisions.

What inspired you to take double degrees in Computer Science and Law?

Back in the days when I wanted to apply for university, one could not study the discipline of law by itself. It had to be coupled with another discipline or degree such as arts, economics, or science. I was fascinated with how computers worked because when I was in Senior High School, the personal computer had just been invented. I remember my school had two little computers that were locked inside a room with iron bars on the windows and a 10cm-thick metal door, the kind of door that you would see only at banks where they held safe deposit boxes. I remember only those working on the school magazine were allowed into that very well-secured room. That was the beginning of my love story with computers, so that was why I wanted to study computer science at university.

I could have just studied computer science as a degree on its own, but I was also very much interested in justice. I cared, and I still care, a lot about treating people fairly and people being given their fair due. I studied Aristotle and Aquinas in high school, so I was very much attracted to their philosophies on justice. I also remember that at the time, my father was very supportive of me studying to join an honourable profession, such as the legal profession – in fact, more so than me being a computer scientist. So once I had attained the requisite grades to be admitted into the double degree programme, it was a very easy decision to take the double degree in computer science and law, although at the time, many of my teachers and people I would meet, including my law professors, were baffled by my choice of double degree. Back in those days, and perhaps it is still the same now, out of a cohort of around 220 law students, there were only about two or three who did the combination of computer science and law.

What is the most challenging aspect of this project?

Acting as a mediator, explaining the workings, functionality, and technical aspects of technology, especially AI, to those who are legally trained, such as lawyers and regulators, is not easy. I think it will remain a challenge for many years to come.

Words and concepts often mean different things to different people, regardless of their discipline or educational level. However, the unsuspecting may assume that their comprehension of AI is already correct and complete when, in fact, it is skewed, incorrect, and tainted by their own misinterpretation of words and concepts.

What does a typical day at work entail for you?

I’m not quite sure if there is a typical day as I’m often fielding many curveballs! The unexpected interruptions can often end up taking up a lot of time to settle. On a good day, I would have very few emails or phone calls to attend to (which is rare), and I can proceed with what I had planned to do, whether in terms of research, preparation for teaching, teaching, or attending meetings. More often than not, most of this goes out the window when some unexpected urgent administrative matter needs resolving, especially in relation to research projects, grants, funding, and so on.

What advice would you give a young researcher who’s interested in getting started in your field?

The advice I always give young researchers is to follow your heart. Discover what excites you and follow through. When we are pursuing our interests, work does not seem like work, but rather the excitement and interest will fuel our energies to keep going. And of course in this modern day and age, we need to be flexible and adaptable and not box ourselves or our careers into a corner or a straight line!


Further reading

Lim, HYF, (2022) Regulatory Compliance, in Kerrigan, C, (Ed) Artificial Intelligence: Law and Regulation. Cheltenham: Edward Elgar Publishing Ltd, 85–108.

Hannah Yee-Fen Lim

Hannah Yee-Fen Lim is uniquely qualified with double degrees in computer science and law. She is an internationally recognised legal expert on all areas of technology law, including AI, data, blockchain, cryptoassets, Fintech, and cybersecurity. She serves WHO, UNCITRAL, UNIDROIT & UK Law Commission as a legal expert advising on areas including AI and Cryptoassets.

Cite this Article

Yee-Fen Lim, H, (2023) Developing regulatory compliance for artificial intelligence. Research Features. Available at: 10.26904/RF-147-4769945365

Creative Commons Licence

(CC BY-NC-ND 4.0) This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
