Posted: 7 June 2019 · 5 min read

Regulating the Machines

Transparent Algorithms

Regulators and boards are concerned that they bear responsibility for automated decisions yet have no way of understanding how those decisions are made. The learning mechanisms underlying modern algorithms mean that even the developers who build them cannot say why a particular decision was reached.

For example, algorithms can be used to predict the risk of breast cancer by analysing mammograms. The predictive power may exceed that of human review alone, yet the medical specialists may be unable to say why the model indicates a propensity to cancer. Doctors and patients are put in the position of making serious choices without knowing what the relevant risk factors are.
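
To make the opacity concrete, here is a minimal sketch in Python using scikit-learn on synthetic data. It is illustrative only: the four features are hypothetical stand-ins, not a real diagnostic system. The trained model returns a risk score for a patient, but no individual reason accompanies it, and global feature importances give only a partial, population-level explanation.

```python
# A minimal sketch of the opacity problem: the model returns a risk
# score, but no human-readable reason. Synthetic data only; the four
# features are hypothetical stand-ins for imaging-derived measurements.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

patient = rng.normal(size=(1, 4))
risk = model.predict_proba(patient)[0, 1]
print(f"Predicted risk: {risk:.2f}")  # a number, with no accompanying 'why'

# Global feature importances are only a partial explanation: they
# cannot say why this patient received this particular score.
print(model.feature_importances_)
```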

This arises because we are not demanding transparency of algorithmic processes. We need our systems to be explainable, to be intelligible. We need humans to build, and retain responsibility for, the systems that affect our lives.

This is a new battle in the old war between individual agency and unfettered corporate and government power to make decisions that affect our lives without appropriate governance systems in place. The Royal Commission into misconduct in the financial services industry demonstrated that we need good governance systems and a strong culture that encourages integrity. It would be a travesty if the response to human failings was to abdicate responsibility and hand it to the machines.

There are many precedents for how society gains comfort that complex systems are being applied appropriately. The “three lines of defence” approach to risk management in financial services is one:

  1. The commercial activities of the entity
  2. The internal risk function that monitors first-line activities, and
  3. The separate audit functions that assess processes and outcomes against established external criteria.

Audit and assurance processes for reviewing algorithms are readily available. But appropriate first-line processes for the design, implementation, and monitoring of algorithms are still needed.

In practice, an appropriate professional, subject to a robust code of conduct, must take responsibility for ensuring that algorithmic processes are consistent with corporate and social objectives. The actuarial profession is well suited to this task: actuaries are trained to manage and review corporate activities for consistency with community expectations.

The actuarial role in finance is analogous to the engineer’s in construction: these technically trained professions work under mandatory codes of conduct and professional standards to design and implement processes and outcomes, which are then reviewed by independent auditors.

Actuaries’ deep knowledge of data science and statistical methods, applied in accordance with governance and ethical standards, uniquely positions them to bring both technical skill and ethical judgement to the broader social ramifications. Designing and building algorithms requires not only questions such as “Are these data suitable for the task?” and “What are the best AI techniques available for this exercise?”, but also “For what purpose will this algorithm be applied?”, “Is the algorithm likely to be used in a socially acceptable way?”, “Could this algorithm produce undesirable psychological effects or exploit natural human frailties?” and “Are its processes and outcomes consistent with corporate and social objectives?”

The standards of fairness and acceptability used to assess algorithms are ultimately matters of societal deliberation. The technical processes for reviewing algorithms can be adapted from the current procedures actuaries use, such as the frameworks used to build and review the stochastic processes that underlie economic capital models.
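
For illustration, here is a stripped-down sketch of the kind of stochastic model this refers to. It is a hedged example only: the frequency-severity structure, distributions, and parameters are illustrative assumptions, not an actuarial standard. It simulates annual insurance losses by Monte Carlo and reads off a 99.5th-percentile capital figure, the sort of output whose assumptions and use actuaries are already trained to review.

```python
# A hedged sketch of a stochastic economic capital model: Monte Carlo
# simulation of annual losses and a 99.5% capital requirement.
# All distributions and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

# Frequency-severity model: Poisson claim counts, lognormal claim sizes.
claim_counts = rng.poisson(lam=5.0, size=n_sims)
annual_losses = np.array([
    rng.lognormal(mean=10.0, sigma=1.0, size=n).sum() for n in claim_counts
])

expected_loss = annual_losses.mean()
var_995 = np.quantile(annual_losses, 0.995)  # 99.5th percentile of losses
capital = var_995 - expected_loss            # capital above the expected loss

print(f"Expected annual loss: {expected_loss:,.0f}")
print(f"99.5% VaR:            {var_995:,.0f}")
print(f"Economic capital:     {capital:,.0f}")
```

The governance question asked of such a model, whether its assumptions are defensible and its output used appropriately, is the same question now needed for commercial algorithms.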

This is also what happens now with the estimation of insurance liabilities: actuaries don’t pretend to know the future; instead, they are trained to arrive at a prudent outcome from a reasonable individual’s perspective. Importantly, it is one person’s opinion: boards rely on the experience and standing of the actuary. This works now for arcane, difficult mathematical and statistical problems, and the actuarial skillset needs little adaptation to take on the growing problem of algorithmic fairness.

Meet our author

Rick Shaw

Partner, Consulting

Rick is a partner in Consulting and part of the Actuaries practice. He has extensive Australian and overseas experience, and is recognised internationally for his work on capital modelling, regulatory systems, and pricing and valuation. Rick’s primary focus is developing management information systems and integrating capital models into companies’ decision making. He has also advised regulators on actuarial valuation standards and capital model approval.