Posted: 20 October 2021 · 10 min read

The Future of Algorithms in Asia Pacific: Preparing for the impact of the new algorithm regulation

This blog was originally posted on 20 October 2021, and was last updated on 10 January 2022, to reflect that the regulations drafted in 2021 have been approved.

Introduction
A growing number of organisations are leveraging the power of algorithms to optimise their business models and provide a greater range of services. Though largely invisible to users, sophisticated algorithms are tirelessly operating behind the scenes, influencing all aspects of human life. From advertisements published on social media to the trading strategies of large financial companies, algorithms are everywhere.

Naturally, the growing use of algorithms has resulted in an increased public interest, especially around trustworthiness, and has prompted regulators to apply greater scrutiny. Many jurisdictions are proposing legislative frameworks, a prominent example being the EU Artificial Intelligence Act.

More recently, on 4 January 2022, the Cyberspace Administration of China (CAC) approved the Provisions on the Administration of Algorithm Recommendations for Internet Information Services, a set of rules regulating algorithms across the People’s Republic of China (PRC).

Addressing topics such as data governance, human oversight, and user protection, the regulations signal a shifting environment whereby the historically unregulated online market will no longer be able to grow unfettered. China’s domestic regulations will not only impact algorithm providers who operate in China, but will likely have global implications as other governments, including Australia and the US, look to their domestic governance and tighten their algorithmic regulations.

This blog will look at some of the key areas for consideration in relation to CAC’s regulations to help impacted organisations across Asia prepare their controls frameworks.

Understanding the Cyberspace Administration of China’s Algorithm Regulations

Scope and supervision
Formally issued on 4 January 2022, the regulations come into effect from 1 March 2022. Under Article 2 of the regulations, any Internet Information Services provider operating within the territory of the PRC is subject to the regulations, irrespective of the organisation’s country of origin. Consequently, any Australian organisation which provides digital services within the PRC must adhere to the regulations. The CAC is the designated authority for supervision and enforcement.

Article 6 of the regulations identifies some of the algorithm uses which will be prohibited. These include endangering national security, disrupting economic and social order, and infringing on the legitimate rights of others.

Core principles for the development of controls frameworks
The regulations require organisations to operate their algorithms in accordance with a trustworthy framework, which is defined in Article 4 to include fairness, openness, transparency, scientific rationality, and honesty. All subsequent regulations imposed upon algorithms are based upon these core principles. In order to adhere to these principles, organisations must design their internal controls frameworks to adequately identify AI risk factors and establish mitigating measures, which have to be reviewed, assessed and verified regularly.

The importance of establishing effective controls policies and practices is introduced in Article 5 and further expounded upon in the remaining articles of the regulations. Self-discipline is repeatedly emphasised throughout the regulations, implying that company-level controls frameworks based on the core governing principles from Article 4 must be developed first. Once companies have created clear trustworthy AI frameworks, the establishment of robust and sound industry standards will follow.

Security assessments and data governance
Organisations and providers of algorithm services are required to perform security assessments and ongoing monitoring to ensure that no external factors can influence the outcomes of their models. The ability to shut down selected algorithms is an important element companies need to consider, especially where data bias or error is discovered in either the input data or the outcomes of the algorithm. Organisations must also establish adequate governance structures and communication channels as a precaution in case such design inadequacies are discovered.

Robustness and efficiency must be preserved through optimised data governance practices, so the information provided to users is not biased or misleading. Ensuring source data is free of bias and errors is thought to be important in eliminating discriminatory outcomes or the possibility for the AI to influence human behaviour in an adverse manner.

Prioritising the user
Several articles are dedicated to the design of algorithm services from the end-user point of view. The text is heavily human-centred, and a significant portion of the regulations is dedicated to achieving transparency for the benefit of users. Specifically, organisations must ensure that they clearly communicate all relevant information regarding the intended purpose and the operating mechanism of their models to their users.

Algorithm service providers must not design their products to induce behaviours which violate established norms and practices, such as excessive consumption. Users must be provided with adequate options and capabilities to manage algorithms, including effective opt-out options should the AI violate their preferences in any form.

Furthermore, the regulations impose strict limitations on the management of user models and propose that firms constantly monitor that user rights are preserved. Activities such as manipulating lists and rankings of search results, blocking information, falsely registering accounts, hijacking webpage traffic or controlling hot searches are considered to interfere with fair access to information and consequently to promote unacceptable values. The end user is central to the regulations, and algorithms that attempt to manipulate them will be prohibited.

What is next?
The speed of change in the Chinese regulatory landscape will require organisations to act now to avoid falling behind in safeguarding algorithm compliance. The regulations are focused on the human element and impose strict restrictions on algorithms which do not have sufficient controls in place to protect users and enhance trustworthiness.
Organisations providing algorithm services within the PRC must re-assess their internal risk management frameworks and adapt to the requirements in the regulations. The technology behind algorithms is complex, and companies must ensure that they have the right skillset and knowledge within their teams. Existing policies and controls should be reconsidered and redesigned to fit the new regulations, with everything duly documented. The regulations require continuous monitoring and reviews, and therefore firms must ensure adequate resourcing and oversight. In order to achieve this, it is essential that governance frameworks are adapted to promote compliance from top to bottom.

The regulations issued by China were first proposed in August 2021 and have been swiftly approved and incorporated within the existing legislative framework. Re-designing controls frameworks will require time, however delays in action could lead to substantial financial implications, so there are clear opportunities to be had by acting quickly. If a company is found to provide services that do not protect the legitimate rights and interests of customers, it could be subject to the penalties outlined in Chapter 5.

Whilst the scope of these regulations is currently limited to organisations which operate within the territory of the PRC, it does signal a marked change in the AI regulatory landscape. China’s regulations follow the EU Artificial Intelligence Act as landmark legislation intended to regulate the development, distribution and use of algorithms and artificial intelligence.

China and the EU have now set a regulatory precedent and international bodies and governments will certainly be watching the market implications closely.

Other national regulators, including in Australia, are expected to soon prepare their own respective legislation to regulate AI within their markets. We strongly advocate that our clients prepare themselves for the introduction of such regulations, including through appropriate governance and controls to enable readiness and compliance.

Algorithms are complex and navigating through the regulations could be challenging. Should you wish to understand more about the algorithmic regulatory landscape and how it might impact your business, please do not hesitate to get in touch with us.

This blog was also co-authored by Mark Cankett and Barry Liddy.

More about our authors

Scott Jermy

Director, Algorithm Assurance

Scott currently works as Global Project Lead for Algorithm Assurance. Having joined Deloitte from the banking industry, Scott specialises in Banking and Treasury Capital Markets, in particular within the area of algorithmic risk assurance. Scott has worked for Deloitte in London and New York, prior to joining Deloitte in Australia. Scott and his team support Deloitte’s banking and cross-sector clients in managing their algorithmic risk, through providing assurance and advice over algorithmic governance and controls frameworks. Currently, Scott is a member of the AI Institute Working Group in Australia. Globally Scott is a member of the Global Algorithm Assurance Central Team, as well as co-ordinator of the EMEA, Asia Pacific, North America and SLATAM Algorithm Assurance working groups.

Jonathan Sykes

Partner, Financial Services

Jonathan is a partner, based in Sydney, focusing on financial risk and regulations in the banking industry. He has extensive experience in assisting clients navigate large transformation projects, including risk transformation and Basel related projects. He was formerly a partner in the Financial Services Advisory practice in Deloitte South Africa, where he was also the IFRS 9 and credit risk leader for Deloitte Africa.