The rise of data and AI ethics
Managing the ethical complexities of the age of big data

6-minute read | 24 June 2019

As technology tracks huge amounts of personal data, data ethics can be tricky, with very little covered by existing law. Governments are at the center of the data ethics debate in two important ways.

Governments have defined almost every conceivable aspect of property ownership. Can I cut down my neighbor’s tree if it grows over my patio? Only those limbs that grow over the property line. Can I play music on my porch? Only if it doesn’t interfere with my neighbor’s enjoyment of their property. The complexity of the legal questions surrounding physical property is immense, but years of developing constitutions and legislation, as well as court decisions, have made the gray areas manageably small, and property owners and other citizens understand their property rights and responsibilities.

The same is not true of data rights. The entire notion of “data and AI ethics” has become of central interest to individuals, businesses, and governments alike because of the burgeoning role of data in our lives. The internet, the Internet of Things, and sensors can track an astounding amount of data about a person: from sleep habits, to moment-to-moment location, to every keyboard click ever executed. Moreover, as artificial intelligence (AI) systems make more decisions, AI ethics becomes increasingly relevant to public policy. If a self-driving car faces a dangerous situation, should it choose the course least risky to its passengers or to a pedestrian, even if the pedestrian is at fault? Data ethics can be tricky, and very little of it is defined by existing law.

The United States Constitution guarantees “the right of the people to be secure in their persons, houses, papers, and effects”—but how does that apply to an individual’s data and privacy? In what ways may companies, or individuals, or even governments that collect data about an individual use that information?

Here are four of the biggest issues driving the conversation around data and AI ethics:

  1. Privacy. Citizens face widespread threats to their privacy: smartphones gather data on nearly every aspect of their users’ lives, and governments can potentially examine a citizen’s online activity. Law enforcement agencies worldwide are deploying facial recognition technology, and retail outlets have begun cataloging shoppers with facial recognition that can be matched to their credit cards, often without customers’ awareness or consent.1 Such practices are increasingly common.
  2. Lack of transparency. AI-based algorithms are often closely held secrets or are so complex that even their creators can’t explain exactly how they work, which makes it harder to trust their results. From bank loans to college admissions to job offers, decisions are increasingly made by these complex algorithms. Which decisions might be made by “secret” criteria? Which aren’t? And what role should government play in ensuring transparency?

  3. Bias and discrimination. Real-world bias can shape algorithmic bias. Some court systems have begun using algorithms to assess defendants’ risk of reoffending, and even to inform sentencing decisions. Research into these criminal risk scores has raised concerns about potential algorithmic bias and prompted calls for closer examination.2

    Understanding how an algorithm works will not, by itself, solve the broader issue of discrimination. The critical factor is the underlying data set: if the data has historically overrepresented a certain gender, race, or nationality, the results can be biased against groups outside those cohorts, as the sketch following this list illustrates.

  4. Lack of governance and accountability. A central question in the AI debate is who governs AI systems and their data. Who creates ethical standards and norms? Who is accountable when unethical practices emerge? Who authorizes the collection, storage, and destruction of data?
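
To make the data-set point concrete, here is a minimal sketch in Python, using entirely synthetic data (the `group` and `skill` features are invented for illustration, not drawn from any study cited here), of how a model trained on historically skewed labels reproduces that skew:

```python
# A minimal sketch with synthetic data: a model trained on historically
# biased hiring labels learns to reproduce the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)  # 0 = group A, 1 = group B (protected attribute)
skill = rng.normal(0, 1, size=n)    # same skill distribution for both groups

# Historical hiring decisions favored group B regardless of skill:
# the label encodes past discrimination, not just merit.
hired = ((skill + 0.8 * group + rng.normal(0, 0.5, size=n)) > 0.5).astype(int)

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# The model faithfully learns the historical skew.
pred = model.predict(X)
for g in (0, 1):
    print(f"predicted hire rate, group {g}: {pred[group == g].mean():.1%}")
```

Because the label itself encodes past discrimination, inspecting the model alone cannot remove the disparity; the training data has to be examined and corrected.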

These high-profile issues, in turn, are driving responses by stakeholders ranging from governments to corporations. To learn more, read Can AI be ethical? Why enterprises shouldn’t wait for AI regulations.

Government’s role in data and AI ethics

Governments are at the center of the data ethics debate in two important ways. First, governments “own” a massive amount of data about citizens, from health records to what books a citizen checked out of the library. Second, the government is a “regulator” of the corporate use of data collected online.

Governments are increasingly considering their regulatory responsibility. For instance, the European Union’s General Data Protection Regulation (GDPR) provides strict controls over cross-border data transmissions, gives citizens the right to be “forgotten,”3 and mandates that organizations, including government agencies, provide “data protection by design” and “data protection by default.”4

Similar to the GDPR, the state of California’s Consumer Privacy Act imposes more stringent privacy requirements.5 Other planned global efforts include South Korea’s “Robots Ethics Charter,” developed by its Ministry of Commerce, Industry, and Energy to give manufacturers ethical standards for programming robots,6 and an international panel formed by Canada and France to rein in unethical uses of AI.7

Evolving privacy standards and ethics frameworks

Many governments are formalizing their approach to algorithmic risks. The UK government, for example, has published a data ethics framework to clarify how public sector entities should treat data.8 Canada has developed an open-source Algorithmic Impact Assessment questionnaire that can assess and address risks associated with automated decision systems.9 The European Union, too, has been gathering comments from experts on its ethics guidelines for AI.10

Developing AI toolkits

Many big technology firms are also invested in addressing these challenges. IBM recently released the AI Fairness 360 open-source toolkit to check unwanted bias in data sets and machine learning models. Similar initiatives include Facebook’s Fairness Flow and Google’s What-If Tool.11 In another example, the Ethics and Algorithm Toolkit was developed collaboratively by the Center for Government Excellence, San Francisco’s DataSF program, the Civic Analytics Network, and Data Community DC.12
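
As an illustration, the following hedged sketch shows how such a toolkit might be used to flag bias in a data set. It assumes AI Fairness 360 is installed (pip install aif360); the toy data frame and its column names are invented for the example:

```python
# A sketch using IBM's open-source AI Fairness 360 toolkit to check a
# toy, invented data set for unwanted bias against a protected group.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: 'group' is the protected attribute, 'hired' the favorable outcome.
df = pd.DataFrame({
    "group": [0, 0, 0, 0, 1, 1, 1, 1],
    "skill": [0.2, 0.9, 0.4, 0.7, 0.3, 0.8, 0.5, 0.6],
    "hired": [0, 1, 0, 0, 1, 1, 0, 1],
})

dataset = BinaryLabelDataset(
    df=df, label_names=["hired"], protected_attribute_names=["group"]
)
metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"group": 0}],
    privileged_groups=[{"group": 1}],
)

# Disparate impact is the ratio of favorable-outcome rates between the
# unprivileged and privileged groups (1.0 = parity); a common rule of
# thumb flags values below 0.8.
print("disparate impact:", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())
```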

A consortium approach to AI ethics

Industry consortia are developing standards and frameworks for their sectors; examples include:

  • The Council on the Responsible Use of AI, formed by Bank of America and Harvard Kennedy School’s Belfer Center for Science and International Affairs;13
  • A consortium in Singapore to drive ethical use of AI and data analytics in the financial sector;14 and
  • The Partnership on AI, representing some of the biggest technology firms, including Apple, Amazon, Google, Facebook, IBM, and Microsoft, to advance the understanding of AI technologies.15

Data signals

  • 107 countries have put in place legislation to protect citizens’ data and privacy.16
  • Since 2013, 9.7 billion data records have been lost or stolen globally.17
  • EU Data Protection Authorities have received more than 95,000 complaints under the GDPR since its launch.18
  • From 2017 to 2018, media mentions of AI and ethics doubled; more than 90 percent of those mentions expressed positive or neutral sentiment.19
  • The UK government launched a Centre for Data Ethics and Innovation with a £9 million budget.20

Moving forward

  • Acknowledge the need for ethics in the AI era. Create an AI ethics panel or task force by tapping into expertise from the private sector, startups, academia, and social enterprises.
  • Create an algorithmic risk management strategy and governance structure to manage technical and cultural risks.
  • Develop governance structures that monitor the ethical deployment of AI.
  • Establish processes to test training data and outputs of algorithms, and seek reviews from internal and external parties.
  • Encourage diversity and inclusion in the design of applications.
  • Emphasize creating explainable AI algorithms to enhance transparency and build trust among those affected by algorithmic decisions (see the sketch following this list).
  • Train developers, data architects, and users of data on the importance of data ethics specifically relating to AI applications.
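
As one illustration of the explainability point above, the following sketch uses permutation importance, a generic model-inspection technique (shown here via scikit-learn, with invented loan-approval features), to reveal which inputs a model’s decisions actually depend on:

```python
# A minimal explainability sketch: permutation importance shuffles each feature
# in turn and measures how much the model's accuracy drops; a large drop means
# the decision leans heavily on that feature. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2_000
income = rng.normal(50, 15, size=n)
debt = rng.normal(20, 8, size=n)
noise = rng.normal(0, 1, size=n)  # an irrelevant feature, for contrast
approved = ((income - debt + rng.normal(0, 5, size=n)) > 25).astype(int)

X = np.column_stack([income, debt, noise])
model = RandomForestClassifier(random_state=0).fit(X, approved)

result = permutation_importance(model, X, approved, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt", "noise"], result.importances_mean):
    print(f"{name}: {score:.3f}")  # income and debt matter; noise should not
```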

Potential benefits

  • More accountability from developers;
  • The rise of AI for social good; and
  • A growing ecosystem approach to AI.

Risk factors

  • Threat to citizens’ right to privacy;
  • Lack of transparency; and
  • Bias and discrimination.

Read more about data and AI ethics in the Chief Data Officer Playbook.
