Trusting the Algorithms

Regulators and boards are concerned that they bear responsibility for automated decisions they have no way of understanding. The learning mechanisms underlying these algorithms mean that even their developers may not be able to explain why particular decisions are made.

For example, algorithms can be used to predict the risk of breast cancer from mammograms. Their predictive power may exceed that of human review alone; however, there are instances where medical specialists cannot say why the algorithm indicates a propensity to cancer. Doctors and patients are put in the position of making serious choices without knowing what the relevant risk factors are.

This arises because we are not demanding transparency of algorithmic processes. We need our systems to be explainable and intelligible. We need humans to build, and retain responsibility for, systems that impact our lives.

This is a new battle in the old war between individual agency and unfettered corporate and government power to make decisions that affect our lives without appropriate governance in place. The Royal Commission into financial services demonstrated that we need good governance systems and a strong culture that encourages integrity. It would be a travesty if the response to human error were to abdicate responsibility and hand it over to the machines.

There are many precedents for determining how society gains comfort that complex systems are being appropriately applied. The “three lines of defense” approach to risk in financial services applies:

  1. The commercial activities of the entity;
  2. The internal risk function that monitors first-line activities; and
  3. The separate audit functions that assess processes and outcomes against established external criteria.

Audit and assurance processes for reviewing algorithms are readily available. What is still needed are appropriate first-line processes for the design, implementation, and monitoring of algorithms.

Practically, an appropriate professional, subject to a robust code of conduct, must take responsibility for ensuring that algorithmic processes are consistent with corporate and social objectives. The actuarial profession is well suited to this task: actuaries are trained to manage and review corporate activities to encourage consistency with community expectations.

The actuarial role in finance is analogous to the engineer’s in construction: both are technically trained professions that work under mandatory codes of conduct and professional standards to design and implement processes and outcomes, which are then reviewed by independent auditors.

Actuaries’ deep knowledge of data science and statistical methods, applied in accordance with governance and ethical standards, uniquely positions them to bring both technical skill and ethical consideration of broader social ramifications. Designing and building algorithms requires not only enquiries such as “Are these data suitable for the task?” and “What are the best AI procedures available for this exercise?”, but also “For what purpose will this algorithm be applied?”, “Is the algorithm likely to be used in a socially acceptable way?”, “Could this algorithm produce undesirable psychological effects or exploit natural human frailties?” and “Are algorithmic processes and outcomes consistent with corporate and social objectives?”

The standards of fairness and acceptability used to assess algorithms are ultimately matters of societal deliberation. The technical processes for reviewing algorithms can be adapted from the procedures actuaries already use, such as the frameworks for building and reviewing the stochastic processes that underlie economic capital models.

This is also what happens now with the estimation of insurance liabilities: actuaries do not pretend to know the future; instead, they are trained to arrive at a prudent outcome from a reasonable individual’s perspective. Importantly, it is one person’s opinion: boards rely on the experience and standing of the actuary. This works now for arcane, difficult mathematical and statistical problems, and the actuarial skillset needs little adaptation to take on our growing problem of algorithmic fairness.
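To make the idea of a prudent outcome concrete, here is a minimal sketch in Python: it simulates a hypothetical claims portfolio and books the liability at a percentile above the mean of the simulated distribution. The frequency and severity parameters, and the 75th-percentile probability of sufficiency, are illustrative assumptions rather than prescribed figures.

```python
import numpy as np

# Minimal sketch: a "prudent" liability estimate taken as a percentile of
# a simulated claims distribution. All parameters are illustrative.
rng = np.random.default_rng(seed=42)

n_sims = 10_000
claim_counts = rng.poisson(lam=500, size=n_sims)      # claims per scenario (assumed)
totals = np.array([
    rng.lognormal(mean=8.0, sigma=1.2, size=n).sum()  # claim severities (assumed)
    for n in claim_counts
])

central_estimate = totals.mean()
prudent_estimate = np.percentile(totals, 75)          # assumed probability of sufficiency

print(f"Central estimate:    {central_estimate:,.0f}")
print(f"Prudent estimate:    {prudent_estimate:,.0f}")
print(f"Implied risk margin: {prudent_estimate - central_estimate:,.0f}")
```

The booked figure remains one professional’s opinion: the percentile chosen, like the models behind it, is a judgement the actuary must be able to defend.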

Deloitte Offerings

The Business Algorithm team in Deloitte Australia will provide actuarial sign-off that algorithmic processes meet specified criteria. The sign-off will cover the design, implementation, and monitoring of the algorithmic process and its performance, in a format that is transparent, intelligible, and explainable. This will allow companies to restore their trust and confidence in business algorithms, and may open the opportunity to insure algorithms like other commercial tools used in the business.

This will be a “Professional Service” but not “Prescribed Actuarial Advice” under the Actuaries Institute’s Code of Conduct, as at this stage there is no legislative imperative. The Code’s requirements for independence and impartiality apply. Actuarial certification is similar to (audit) assurance but has some distinct aspects: actuarial work is inherently subjective and does not necessarily require complete information for certification. The raison d’être of the actuarial profession is the exercise of judgement to balance conflicting (financial) objectives. This construct is ideal for opining on algorithms, where conflicting corporate and social objectives will need to be balanced. Standard qualifications will apply to our work.

Activities will include questions and tests such as:

  • Does your organization have a good handle on where algorithms are deployed?
  • Have you evaluated the potential impact should those algorithms function improperly?
  • Does senior management within your organization understand the need to manage algorithmic risks?
  • Do you have a clearly established governance structure for overseeing the risks emanating from algorithms?
  • Do you have a program in place to manage these risks? If so, are you continuously enhancing the program over time as technologies and requirements evolve?
  • Stress and scenario testing.
  • Testing outcomes against corporate goals.
  • Model implementation review.
  • Data integrity and bias tests (see the sketch after this list).
  • Defining success and setting boundaries for algorithms.
  • Review of feedback loops.
  • What implicit assumptions underlie the algorithm?
  • Are the implicit assumptions coherent and consistent?
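
As one concrete example of a data integrity and bias test, the sketch below checks a hypothetical scoring algorithm for demographic parity: whether approval rates differ materially between groups defined by a protected attribute. The synthetic data, the decision threshold, and the five-percentage-point tolerance are all illustrative assumptions.

```python
import numpy as np

# Minimal bias-test sketch: demographic parity for a hypothetical
# approval algorithm. Data, threshold, and tolerance are illustrative.
rng = np.random.default_rng(seed=0)

n = 10_000
group = rng.integers(0, 2, size=n)                 # protected attribute (0 or 1)
scores = rng.normal(0.50 + 0.03 * group, 0.15, n)  # scores from the algorithm under review
approved = scores > 0.5                            # the automated decision

rate_0 = approved[group == 0].mean()
rate_1 = approved[group == 1].mean()
parity_gap = abs(rate_0 - rate_1)

TOLERANCE = 0.05  # assumed materiality threshold (5 percentage points)
print(f"Approval rate, group 0: {rate_0:.3f}")
print(f"Approval rate, group 1: {rate_1:.3f}")
verdict = "within" if parity_gap <= TOLERANCE else "exceeds"
print(f"Demographic parity gap: {parity_gap:.3f} ({verdict} tolerance)")
```

In practice the reviewer would run such tests on the organisation’s actual decision records, and demographic parity is only one of several fairness criteria that may be in tension with one another.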