With technology tracking huge amounts of personal data, data ethics can be tricky, and very little of it is covered by existing law. Governments are at the center of the data ethics debate in two important ways.
Governments have defined almost every conceivable aspect of property ownership. Can I cut down my neighbors’ tree if it grows over my patio? Only those limbs that grow over the property line. Can I play music on my porch? Only if it doesn’t interfere with my neighbors’ enjoyment of their property. The complexity of the legal questions surrounding physical property is immense, but years of constitutions, legislation, and court decisions have made the gray areas manageably small, and property owners and other citizens understand their property rights and responsibilities.
The same is not true of data rights. The entire notion of “data and AI ethics” has become a matter of central interest to individuals, businesses, and governments alike, owing to the burgeoning role of data in our lives. The internet, the Internet of Things, and sensors can track an astounding amount of data about a person—from sleep habits, to moment-to-moment location, to every keystroke ever executed. Moreover, as artificial intelligence (AI) systems make more decisions, AI ethics become increasingly relevant to public policy. If a self-driving car faces a dangerous situation, should it choose the course least risky to the passengers or to a pedestrian—even if the pedestrian is at fault? Data ethics can be tricky, and very little of it is defined by existing law.
The Fourth Amendment to the United States Constitution guarantees “the right of the people to be secure in their persons, houses, papers, and effects”—but how does that apply to an individual’s data and privacy? In what ways may companies, individuals, or even governments that collect data about a person use that information?
Several high-profile issues are driving the conversation around data and AI ethics, and those issues, in turn, are driving responses by stakeholders ranging from governments to corporations. To learn more, read Can AI be ethical? Why enterprises shouldn’t wait for AI regulations.
Governments are at the center of the data ethics debate in two important ways. First, governments “own” a massive amount of data about citizens, from health records to the books a citizen has checked out of the library. Second, governments act as “regulators” of the corporate use of data collected online.
Governments are increasingly considering their regulatory responsibility. For instance, the European Union’s General Data Protection Regulation (GDPR) provides strict controls over cross-border data transmissions, gives citizens the right to be “forgotten,”[3] and mandates that organizations, including government agencies, provide “data protection by design” and “data protection by default.”[4]
Similar to GDPR, California’s Consumer Privacy Act imposes more stringent privacy requirements.[5] Other efforts around the world include the “Robots Ethics Charter” from South Korea’s Ministry of Commerce, Industry, and Energy, which provides manufacturers with ethical standards for programming robots,[6] and an international panel formed by Canada and France to rein in unethical uses of AI.[7]
Many governments are formalizing their approach to algorithmic risks. The UK government, for example, has published a data ethics framework to clarify how public sector entities should treat data.[8] Canada has developed an open-source Algorithmic Impact Assessment questionnaire that can assess and address risks associated with automated decision systems.[9] The European Union, too, has been gathering comments from experts on its ethics guidelines for AI.[10]
Many big technology firms are also invested in addressing these challenges. IBM recently released the AI Fairness 360 open-source toolkit to check for unwanted bias in data sets and machine learning models. Similar initiatives include Facebook’s Fairness Flow and Google’s What-If Tool.[11] In another example, the Ethics and Algorithm Toolkit was developed collaboratively by the Center for Government Excellence, San Francisco’s DataSF program, the Civic Analytics Network, and Data Community DC.[12]
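To make concrete the kind of check these toolkits automate, consider the “disparate impact” ratio, a common fairness metric that compares the rate of favorable outcomes between an unprivileged and a privileged group. The following is a minimal sketch in plain Python; the group labels, sample data, and 0.8 threshold are illustrative assumptions, not drawn from any particular toolkit’s API:

```python
# Minimal sketch of a disparate impact check, one of the fairness
# metrics that toolkits such as AI Fairness 360 automate.
# The data, group labels, and 0.8 threshold below are illustrative.

def disparate_impact(outcomes, groups, unprivileged, privileged):
    """outcomes: 1 = favorable decision, 0 = unfavorable.
    groups: group membership label for each outcome."""
    def favorable_rate(group):
        members = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(members) / len(members)
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Example: hypothetical loan approvals for two demographic groups.
outcomes = [1, 0, 0, 1, 0, 1, 1, 1, 1, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
ratio = disparate_impact(outcomes, groups, unprivileged="a", privileged="b")
# A widely used rule of thumb flags ratios below 0.8 as potential bias.
print(round(ratio, 2))  # → 0.5
```

Here group “a” receives favorable outcomes 40 percent of the time versus 80 percent for group “b,” so the ratio of 0.5 falls well below the common 0.8 threshold, which is exactly the sort of signal these toolkits surface to developers before a model is deployed.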
Industry consortia, too, are developing standards and frameworks for their respective industries.
Read more about data and AI ethics in the Chief Data Officer Playbook.