Daniel
Daniel is the Associate Editor at OFLS and an MSc candidate in Law and Finance (2021) at the University of Oxford.

Event Report — Robot Directors and Directors’ and Officers’ Liability ‘Insurance’: Adapt or Die

Boardrooms function as the nucleus of the corporate machine, wielding control over the key decisions within the firm. The high-stakes nature of such decisions typically gives rise to the most human of interactions: tense negotiations and bitter debates. However, the advent of artificial intelligence (AI) threatens to upend the traditional anatomy of corporate boards through so-called ‘robo-directors’. In an enlightening presentation, Dr. Adolfo Paolini, a consultant at Clyde & Co., introduced the growing potential of ‘robo-directors’ within the boardroom, as well as the key issues such developments present for corporate governance and insurance liability.

The predictive power of AI, particularly machine learning technologies, provides a powerful decision-making tool for corporations. The ability to analyse complex data sets and deliver quick, decisive courses of action has significant potential to improve the effectiveness of board decision-making and make positive contributions to corporate value over the long term. Indeed, ‘robo-directors’ have already been appointed to some corporate boards in the United States and Hong Kong. However, Dr. Paolini highlighted the ‘black box’ problem of such technologies: the inability to observe the precise processes through which they reach their decisions. Being unable to look inside ‘the mind’ of such directors presents pressing issues for embedding these technologies into existing legal frameworks.

These issues arise when an officer’s or director’s conduct is called into question. For example, a typical Professional Indemnity policy covers officers and directors only where they have made a negligent decision, as opposed to a fraudulent one. As Dr. Paolini stated, the lack of consciousness of ‘robo-directors’ will require insurers to ‘adapt or die’ if they are to continue to extract value from this market going forward.

Although many questions remain unanswered, Dr. Paolini pointed to several means of mitigating these issues. First, stakeholders within the company will need to control the power of AI decision-makers, ensuring that they do not possess the final say. Such technologies should be used for their assistive capacities only and should be adequately supervised. Furthermore, with regard to the frameworks themselves, strict liability is likely to be the essential safety net for establishing liability for AI-related actions, as it removes the assessment of the decision-maker’s state of mind that establishing fault currently requires. However, he admitted that it would take considerable effort to comprehensively adapt existing frameworks to such standards.

True singularity between AI and humans is unlikely to be achieved. However, the growing capabilities of ‘robo-directors’ will likely narrow the gap going forward. In his concluding remarks, Dr. Paolini left us with his three guiding rules for governing AI within the boardroom. Paraphrasing Isaac Asimov’s Three Laws of Robotics, he proposed that ‘robo-directors’ should: (1) not injure the company so as to breach their duties; (2) obey the orders of the Board except when this conflicts with rule 1; and (3) protect their own position provided rules 1 and 2 are not breached.

The Oxford Fintech and Legaltech Society would like to extend its thanks to Dr. Paolini for taking the time to speak with us and for sharing his insights into this area. We look forward to collaborating with him again soon.
