Back in the 1940s, New York City Parks Commissioner Robert Moses wanted to build a highway from New Jersey to Long Island that cut through the heart of Washington Square and Lower Manhattan. One plan was to split the square in two, joined by an elevated pedestrian walkway over the highway. It was the fad of the time to fashion our cities around the needs of cars.
There was opposition, led by Jane Jacobs, and the government planners were not honest about their plans. The outcome was good: Washington Square was saved, and with it Greenwich Village and Lower Manhattan, and now no one who values culture and community would suggest such a plan. We preserved the concept of a city designed to meet the human, emotional needs of its people.
A similar conflict faces us with the rise of algorithms. Algorithms allow efficient transmission and codification of our personal data to make decisions that impact our lives. But a governance system is needed to ensure what we lose as humans does not outweigh the gains. At times it is as if we feel compelled to create a Black Mirror society with surveillance everywhere and our lives reduced to data.
The collective thinking in Silicon Valley tends to assume that the evolution and ubiquity of technology is paramount, and that humans must adapt to the needs of technology: "you have no privacy, get used to it". But whose purpose does this serve? I believe that AI should be intentionally designed to meet the needs of humans and to help us overcome our human fallibilities. We need to demand human-centred design in algorithmic processes.
Unravelling algorithmic processes is complex; algorithms are able to handle thousands of variables and relationships between variables far beyond what any human can consider. The data scientists who build algorithms focus on technical excellence and tend to assume that technology is paramount. They are not trained or inclined to consider governance structures or ethical considerations. The people who do consider societal standards and governance matters tend to be daunted by the technical complexity of the algorithmic processes.
This disparity of skills is exacerbated by inherent trade-offs in algorithmic processes. As well as bias that can arise from training data and discriminatory tendencies in algorithmic code, all algorithmic processes have to compromise between goodness-of-fit and "balance", a statistical notion that reflects measures of fairness. What this means is that algorithms which are well fitted to historical data will inevitably carry forward the discriminatory patterns in that data.
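To make the trade-off concrete, here is a toy sketch in Python. The data is entirely synthetic and the model is a single score threshold, not any real decision system; the point is that a threshold tuned purely for goodness-of-fit on historical outcomes can still select the two groups at very different rates.

```python
# Synthetic "historical" records: (score, group, past_outcome).
# Group A's recorded outcomes skew positive, group B's skew negative.
history = [
    (0.9, "A", 1), (0.8, "A", 1), (0.7, "A", 1), (0.6, "A", 0),
    (0.7, "B", 1), (0.5, "B", 0), (0.4, "B", 0), (0.3, "B", 0),
]

def accuracy(threshold):
    """Goodness-of-fit: how often 'score >= threshold' matches history."""
    return sum((score >= threshold) == bool(outcome)
               for score, _, outcome in history) / len(history)

def selection_rate(threshold, group):
    """Balance measure: share of a group selected at this threshold."""
    members = [s for s, g, _ in history if g == group]
    return sum(s >= threshold for s in members) / len(members)

# Pick the threshold that best fits historical outcomes.
best = max((t / 10 for t in range(11)), key=accuracy)

print("best-fit threshold:", best)
print("selection rate A:", selection_rate(best, "A"))
print("selection rate B:", selection_rate(best, "B"))
```

The best-fit threshold matches the historical outcomes perfectly, yet it selects group A at three times the rate of group B, because the imbalance is already embedded in the data the algorithm fits.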
For algorithms to produce decisions which are consistent with ethical standards, these standards need to be enunciated and adjustments have to be made to the algorithm output. This is complicated by the Russian doll structure of algorithmic processes. For example, the schematic below shows YouTube's recommendation algorithm architecture. YouTube holds potentially about 10,000 data items on each user. The algorithmic process distils these factors in a step-wise manner to produce decisions (video recommendations in this case): 10,000 variables are reduced to 1,000, then to 100, then to a single recommendation or a short list.
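The funnel can be sketched in a few lines of Python. The stage sizes and scoring functions below are hypothetical stand-ins, not YouTube's actual models; the point is the step-wise reduction of a large candidate pool, with a cheap model early and richer models later.

```python
import random

random.seed(0)

# A hypothetical pool of 10,000 candidates with made-up features.
candidates = [{"id": i, "relevance": random.random(), "freshness": random.random()}
              for i in range(10_000)]

def funnel(items, stages):
    """Reduce the pool stage by stage, keeping the top-k at each step."""
    for keep, score in stages:
        items = sorted(items, key=score, reverse=True)[:keep]
    return items

recommendations = funnel(candidates, [
    (1_000, lambda v: v["relevance"]),                      # cheap first pass
    (100,   lambda v: v["relevance"] + v["freshness"]),     # richer model
    (10,    lambda v: 2 * v["relevance"] + v["freshness"]), # final ranking
])

print("final list size:", len(recommendations))
```

Each stage is a separate model with its own output, which is exactly why assessment and adjustment can happen at each level rather than only at the end.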
Ideally, output is assessed at each intermediate stage and adjusted away from pure goodness-of-fit towards self-imposed standards of fairness. The outcome is transparency and human ownership at each level, and of the overall process.
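A sketch of what an adjustment at one intermediate stage might look like. The scores and groups are synthetic, and the 20-item quota is an arbitrary standard chosen purely for illustration: keep the top-scored items, promote the best remaining members of an under-represented group, and report the before-and-after mix so a human owner can review it.

```python
import random

random.seed(1)

# Synthetic candidate pool: group "A" items carry a historical score advantage.
pool = [{"id": i,
         "group": "A" if i < 700 else "B",
         "score": random.random() + (0.2 if i < 700 else 0.0)}
        for i in range(1000)]

def adjusted_stage(pool, keep, group="B", quota=20):
    """Keep the top `keep` items by score, then enforce a minimum count of
    `group`, promoting its best remaining members and dropping the
    lowest-scored majority items to make room."""
    ranked = sorted(pool, key=lambda x: x["score"], reverse=True)
    chosen = ranked[:keep]
    minority = [x for x in chosen if x["group"] == group]
    shortfall = quota - len(minority)
    if shortfall > 0:
        promoted = [x for x in ranked[keep:] if x["group"] == group][:shortfall]
        majority = [x for x in chosen if x["group"] != group]
        chosen = majority[:len(majority) - len(promoted)] + minority + promoted
    # Report the adjustment so it stays visible at this level of the process.
    print(f"{group} before: {len(minority)}, after: "
          f"{sum(x['group'] == group for x in chosen)} of {keep}")
    return chosen

stage_output = adjusted_stage(pool, keep=100)
```

The pure goodness-of-fit selection over-represents the advantaged group; the adjustment trades a little fit for the stated fairness standard, and the printed report makes the trade-off auditable rather than buried in the funnel.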
We need procedures for designing, implementing and monitoring algorithms to ensure outcomes are consistent with corporate and social objectives. This is a difficult task given the broad set of skills required, technical and ethical, and the actuarial profession seems uniquely suited to meeting this challenge.
Actuarial work has been described as more of an art than a science and the profession can be our Jane Jacobs in this world where the Robert Moseses of AI push us toward a dystopia we don’t want.
Rick is a partner of Consulting and part of the Actuaries practice. He has extensive overseas and Australian experience, and is recognised internationally for his work on capital modelling, regulatory systems and pricing and valuation. Rick’s primary focus is developing management information systems and integrating capital models into companies’ decision making. He has also advised regulators on actuarial valuation standards and capital model approval.