Posted: 13 September 2021 | 5 min read

The future of algorithms in China

Preparing for the impact of the new algorithm regulation

Introduction

A growing number of firms are leveraging the power of algorithms to optimise their business models and provide a greater range of services. Though largely invisible to users, sophisticated algorithms operate tirelessly behind the scenes, influencing almost every aspect of daily life. From the ads served on social media to the trading strategies of large financial companies, algorithms are everywhere.

Naturally, the growing use of algorithms has attracted increased interest from the wider public, especially around trustworthiness, and has prompted regulators to apply greater scrutiny. Many jurisdictions are adopting legislative frameworks, a prominent example being the proposed EU Artificial Intelligence Act. China is no exception: the People’s Republic of China has recently issued a proposal for the regulation of algorithms. The recommendations address topics such as data governance, human oversight and user protection, and we believe they will have a significant impact on algorithm providers.

This blog will look at the proposed regulation in detail, outlining the key areas for consideration to help you prepare your control frameworks.
 

Understanding the Chinese algorithm recommendations

Scope and supervision

According to Article 2 of the proposal, any company providing algorithm services within China falls under its scope, irrespective of the country of establishment. This means that if an EU or UK firm, for instance, provides algorithm services within the territory of China, it must adhere to the proposal. The Cyberspace Administration of China (CAC) is the designated authority for supervision and enforcement.

The proposal also identifies algorithm uses which will be prohibited: endangering national security, disrupting economic and social order, and infringing on the legitimate rights of citizens are among the activities banned under Article 6.

Core principles for the development of control frameworks

The proposal requires firms to operate their algorithms within a framework of trustworthiness, which translates into fairness, openness, transparency, scientific rationality and honesty, as outlined in Article 4. The subsequent requirements are founded on these core principles. To adhere to them, companies must design their internal control frameworks to adequately identify AI risk factors and establish mitigating measures, which must be reviewed, assessed and verified regularly.

The importance of having effective control policies and practices in place is introduced in Article 5 and further emphasised throughout the remaining articles of the proposal. Self-discipline is a key theme here, implying that company-level control frameworks must be built on the core governing principles of Article 4 and eventually translate into robust and sound industry standards.

Security assessments and data governance

Providers of algorithm services will be required to perform security assessments and monitoring to ensure that no external factors can influence the outcomes of their models. The ability to shut down selected algorithms is an important capability for companies to consider, especially where bias or error is discovered in either the input data or the outcomes of the algorithm. Firms must also establish adequate governance structures and communication channels for instances when such design inadequacies are discovered.

Robustness and efficiency must be preserved through optimised data governance practices, so that the information provided to users is not biased or misleading. Using data free of bias or error is considered important in eliminating any form of discrimination and in preventing algorithms from influencing human behaviour adversely.

Prioritising the user

Several articles are dedicated to the design of algorithm services from the end user’s point of view. The text is heavily human-centred, and a significant portion of the proposal is dedicated to achieving transparency for the benefit of users. Companies must ensure that all relevant information on the intended purpose and operating mechanism of a model is communicated to users.

Algorithm service providers must not design their products to induce behaviour that violates established norms and practices, such as encouraging users into excessive consumption. Users must be given adequate options and capabilities to manage an algorithm and turn it off should it violate their preferences in any way.

Furthermore, the proposal imposes strict limitations on the management of user models, and firms must continuously monitor that user rights are preserved. Activities such as manipulating lists and rankings of search results, blocking information, falsely registering accounts, hijacking webpage traffic or controlling trending searches are considered to interfere with fair access to information and to promote unacceptable values. The end user is central to the proposed regulation, and algorithms that attempt to manipulate them will be prohibited.
 

What is next?

The speed of change in the Chinese regulatory landscape means firms must act now to avoid falling behind on algorithm compliance. The proposal is focused on the human element and imposes strict restrictions on algorithms that lack sufficient controls designed to protect users and enhance trustworthiness.

Companies providing algorithm services within the territory of China must reassess their internal risk management frameworks and adapt to the requirements of the new proposal. The technology behind algorithms is complex, and companies must ensure that they have the right skillset and knowledge within their teams. Existing policies and controls should be reconsidered and redesigned to fit the new recommendations, with everything duly documented. The proposal calls for continuous monitoring and review, so firms must ensure adequate resourcing and oversight. To achieve this, it is essential that governance frameworks are adapted to promote compliance from top to bottom.

The algorithm recommendations issued by China are at the consultation stage, but we can expect swift progress in incorporating them into the existing legislative framework. The clock is ticking: the deadline for companies to submit comments on the proposal is 26 September 2021. Redesigning control frameworks will also take time, delays in action could lead to substantial financial implications, and there are clear opportunities to be had by acting quickly. If a company is found to provide services that do not protect the legitimate rights and interests of customers, it will be subject to the penalties outlined in Articles 27, 28 and 29.

Algorithms are complex, and navigating the proposal could be challenging. Should you wish to understand more about this new proposal in China, please do not hesitate to get in touch with us.
 


Key contacts

Mark Cankett

Partner

Mark is a Partner in our Banking & Capital Markets Audit Group in London. He is a leading member of our Benchmarks Assurance & Advisory team and a co-Chair of Deloitte’s Global IBOR Reform Steering Committee. Mark has 16 years’ experience across financial services audit and assurance, regulatory compliance, regulatory investigations and financial services disputes. This experience has provided him with a strong technical understanding of wholesale markets, financial benchmarks and related risk and control frameworks. His experience across the industry with respect to IBOR reform has provided him with a unique perspective on the regulatory reform agenda and he is actively assisting clients in this space at present.

Barry Liddy

Director

Barry is a Director within Banking & Capital Markets and a qualified accountant (ACA). He has over 15 years’ experience across industry and financial services. He has worked extensively with investment banks to enhance control frameworks and to assess their design and operating effectiveness to ensure full regulatory compliance. Barry is currently a member of Deloitte’s Algorithm Assurance team, focused on assisting clients in complying with recent requirements under MiFID II (RTS 6). He specialises in supporting banks, asset managers and HFT firms in ensuring they have strong controls in place to address the key risks associated with the development and use of algorithmic trading.

Dzhuneyt Yusein

Assistant Manager

Dzhuneyt is an Assistant Manager within the Markets Assurance team of the Banking & Capital Markets Audit & Assurance Group. His experience covers major regulatory transformation projects such as the IBOR transition and MiFID II transaction reporting for a variety of FS clients across regions, including Tier 1 banks. This experience has given him a detailed understanding of internal control processes and frameworks in banks and other financial institutions.