Automation has underpinned industrial growth for hundreds of years, from the automation of physical tasks in manufacturing to the automation of digital processes in recent decades.
Historically, automation was more prevalent in operational areas, that is, in tasks that did not interact directly with the customer (e.g. robots that make things). As technology became more accessible to consumers over time, customer-facing processes also became candidates for automation (e.g. ATMs).
With digital technology and machine learning, more customer interactions are being automated, and we are now seeing a rapid increase in algorithmic decisions that shape customer experience. Every day we interact with algorithms – from ordering pizza to booking a flight, from settling a dispute while shopping online to riding in elevators with no buttons – and each has a profound impact on our experience as consumers and on our day-to-day lives.
These innovations have allowed new industries to emerge and society to benefit from new products and services. Most industry dialogue on automation, however, has been one-sided, focusing either on cost reduction or on opportunity expansion. We rarely see an examination of the harmful side of automation, or of what an appropriate level of automation looks like.
Algorithmic processes and automation can have both positive and negative impacts on customer experience, as well as on society more generally. We categorise the impact of automation on customer experience into three broad categories:
• Processes that enhance customer experience
• Processes that make the customer experience poorer
• Harmful processes
1. Processes that enhance customer experience
In this category of automation, customers benefit from faster service, flexibility in when they can access services, and relevant, engaging communication from the service provider. In other words, things improve for the consumer. Examples include the convenience of online shopping or of ordering a taxi.
The supplier of the service also benefits, through efficiencies gained from automation and improvements in customer engagement.
2. Processes that make the customer experience poorer
In this category, customers face increased complexity in their interactions with the supplier, a slower overall process due to time spent finding the right information or reaching an officer for assistance, or both, resulting in an overall frustrating experience.
Typically, this is the result of an algorithm being designed for “normal” customer scenarios, that is, for when things run well. When a customer’s need falls outside the “normal” range (e.g. when there is a problem), such an algorithm can produce an overall poorer experience.
Think of calling a utility company when there is a problem. In this “exceptional” scenario, navigating an automated system can be frustrating; a manual process and/or a real person to interact with can vastly improve the customer experience.
In this category of automation, the supplier of the service faces the risk of customer disengagement and complaints.
In some industries, consumers have formed negative perceptions of customer experience to the extent that some companies are using their customer service call centre capability as a means of differentiating themselves from the competition (e.g. an insurance company advertising “a real person will pick up the phone when you have an accident”).
3. Harmful processes
In this category, customers face harmful impacts of automation through privacy intrusions, unfair discrimination in automated decisions or other actions that may fall below community standards.
This may be unintended, e.g. when the automation behaves in unexpected ways. For example, an algorithm-based recruitment process may unintentionally discriminate against certain groups of candidates.
In this scenario, the supplier or service provider faces compensation liability, regulatory action and damage to its brand. For example, banks have faced regulatory intervention over algorithm-based lending to borrowers who could not afford to repay their loans.
Since this category of automation has the greatest harmful impact on consumers and society, it has attracted public scrutiny in recent years, and increasingly we are seeing legislation introduced internationally to hold companies and their officers accountable for decisions made by algorithms and automated systems. Examples include the GDPR’s provisions on the right to object and on automated individual decision-making, the principles of accountability raised by ASIC, and a recent discussion paper on Australia’s AI Ethics Framework.
How do we build algorithms that enhance customer experience and comply with corporate and social objectives?
Things to consider during the development of customer-facing algorithms:
1. An algorithm should be designed and implemented in a way that meets the objectives of the supplier as well as those of customers and society. Is there clarity of purpose for the automation project from corporate, customer and social perspectives?
2. There should be a clear understanding of the value created through automation. For example, while some tasks are suited to automation, others are inherently best served by a human (e.g. the barista who makes fabulous coffee).
3. There should be a tangible improvement in user experience, rather than improvements taken at face value from vendors’ marketing material. Improvements should be verified through field and lab experiments.
4. An algorithm should be designed to handle both “normal” and “exceptional” user scenarios, and should be able to prompt human intervention when needed.
5. There should be clear human ownership of an algorithm. Accountable persons should be empowered with a full understanding of the algorithmic process and have control over its actions.
6. Regular testing and maintenance should be performed to check whether the algorithm is behaving within expectations and whether the system continues to be fit for purpose over time.
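As an illustrative sketch of point 4, the routing logic below sends “normal” requests to automation and escalates everything else to a human. The request fields, categories and confidence threshold are all hypothetical, not a prescribed implementation; a real service would tune them to its own customer scenarios.

```python
from dataclasses import dataclass

@dataclass
class Request:
    """A customer request; fields are illustrative assumptions."""
    category: str      # e.g. "billing_query", "outage_report"
    confidence: float  # model's confidence in its automated answer, 0..1

# Hypothetical categories and threshold, chosen for illustration only.
AUTOMATABLE = {"billing_query", "address_change"}
MIN_CONFIDENCE = 0.8

def route(request: Request) -> str:
    """Route "normal" requests to automation; escalate the rest."""
    if request.category in AUTOMATABLE and request.confidence >= MIN_CONFIDENCE:
        return "automated"
    # Exceptional scenario: unfamiliar category or low confidence,
    # so prompt human intervention rather than frustrate the customer.
    return "human_agent"

print(route(Request("billing_query", 0.95)))  # automated
print(route(Request("outage_report", 0.95)))  # human_agent
print(route(Request("billing_query", 0.40)))  # human_agent
```

The design choice worth noting is that the escalation path is the default: the algorithm only automates when it positively recognises a “normal” scenario, rather than only escalating when it detects a problem.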
Deloitte Business Algorithms has developed a process for designing, implementing and monitoring algorithms that enables human ownership of algorithmic outcomes – “Human in the Loop” (HITL) – which requires human cognisance, ownership and sign-off that algorithmic output is consistent with corporate and social objectives. We help organisations articulate corporate and social objectives, design human-centred customer experiences, use quantitative and qualitative tools to develop intelligible algorithms, apply behavioural-science expertise to encourage desirable behaviours, and provide certification of algorithms.
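To make the human-in-the-loop idea concrete, here is a minimal sketch (an assumption-laden illustration, not Deloitte’s actual process) in which algorithmic outputs are held in a pending queue until a named accountable person signs them off, giving that person both visibility and control:

```python
class HumanInTheLoop:
    """Illustrative sketch: algorithmic outputs take effect only after a
    named accountable person signs them off. All names are hypothetical."""

    def __init__(self):
        self.pending = []   # outputs awaiting human review
        self.released = []  # outputs a human has signed off on

    def propose(self, output):
        """The algorithm proposes an output; nothing is acted on yet."""
        self.pending.append(output)

    def sign_off(self, index, reviewer):
        """A human reviews a pending output and takes ownership of it."""
        output = self.pending.pop(index)
        record = {"output": output, "signed_off_by": reviewer}
        self.released.append(record)
        return record

hitl = HumanInTheLoop()
hitl.propose({"decision": "approve_loan", "customer": "C123"})
record = hitl.sign_off(0, reviewer="jane.doe")
print(record["signed_off_by"])  # jane.doe
```

In practice the sign-off step would carry the checks described above: that the output is intelligible to the reviewer and consistent with corporate and social objectives.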
Mudit is an actuary who specialises in commercialising opportunities using data and technology with a track record in building actuarial functions, product development, leading digital initiatives, business development, pricing, underwriting, risk management and statutory functions.