How CDOs can manage algorithmic risks

by Nancy Albinson, Dilip Krishna, Yang Chu, William D. Eggers, and Adira Levine

07 June 2018

    Automated decision-making algorithms can have many uses in government—but they also entail risks such as bias, errors, and fraud. CDOs can help reduce these risks while capitalizing on these tools’ potential.

    The rise of advanced data analytics and cognitive technologies has led to an explosion in the use of complex algorithms across a wide range of industries and business functions, as well as in government. Whether deployed to predict potential crime hotspots or detect fraud and abuse in entitlement programs, these continually evolving sets of rules for automated or semi-automated decision-making can give government agencies new ways to achieve goals, accelerate performance, and increase effectiveness.

    However, algorithm-based tools—such as machine learning applications of artificial intelligence (AI)—also carry a potential downside. Even as many decisions enabled by algorithms have an increasingly profound impact, growing complexity can turn those algorithms into inscrutable black boxes. Although often enshrouded in an aura of objectivity and infallibility, algorithms can be vulnerable to a wide variety of risks, including accidental or intentional biases, errors, and fraud.


    Chief data officers (CDOs), as the leaders of their organization’s data function, have an important role to play in helping governments harness this new capability while keeping the accompanying risks at bay.

    Understanding the risks

    Governments increasingly rely on data-driven insights powered by algorithms. Federal, state, and local governments are harnessing AI to solve challenges and expedite processes—from answering citizenship questions with virtual assistants at the Department of Homeland Security to evaluating battlefield wounds with machine learning-based monitors.1 In the coming years, machine learning algorithms will also likely power countless new Internet of Things (IoT) applications in smart cities and smart military bases.

    Transformative as such changes are, instances of algorithms going wrong have also increased, typically stemming from human biases, technical flaws, usage errors, or security vulnerabilities. For instance:

    • Social media algorithms have come under scrutiny for the way they may influence public opinion.2
    • In October 2016, in the aftermath of the Brexit referendum, algorithms were blamed for a flash crash that sent the British pound down 6 percent in two minutes.3
    • Investigations have found that an algorithm used by criminal justice systems across the United States to predict recidivism rates is biased against certain racial groups.4

    Typically, machine learning algorithms are first programmed and then trained on existing sample data. Once training concludes, an algorithm can analyze new data, producing outputs based on what it learned during training and, potentially, on any other data it has analyzed since. Three stages of that process are especially vulnerable to algorithmic risks (see the sketch after this list):

    • Data input: Problems can include biases in the data used for training the algorithm (see sidebar “The problem of algorithmic bias”). Other problems can arise from incomplete, outdated, or irrelevant input data; insufficiently large and diverse sample sizes; inappropriate data collection techniques; or a mismatch between training data and actual input.
    • Algorithm design: Algorithms can incorporate biased logic, flawed assumptions or judgments, structural inequities, inappropriate modeling techniques, or coding errors.
    • Output decisions: Users can interpret algorithmic output incorrectly, apply it inappropriately, or disregard its underlying assumptions.
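
    To make these three stages concrete, the following sketch (ours, not the authors'; it assumes Python with scikit-learn and uses synthetic data) marks where each class of risk enters a typical supervised-learning pipeline.

    ```python
    # Minimal sketch of the train-then-predict pipeline described above,
    # annotated with the three vulnerable stages. All data is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # --- Data input: biased, incomplete, or unrepresentative training data
    # flows straight into whatever the model learns.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # --- Algorithm design: the choice of model family, features, and
    # hyperparameters encodes assumptions that may not fit the problem.
    model = LogisticRegression()
    model.fit(X_train, y_train)

    # --- Output decisions: scores are probabilities, not verdicts; the
    # threshold, and how users act on it, are risk points of their own.
    scores = model.predict_proba(X_test)[:, 1]
    decisions = scores > 0.5  # this cutoff is itself a judgment call
    ```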

    The problem of algorithmic bias

    Governments have used algorithms to make various decisions in criminal justice, human services, health care, and other fields. In theory, this should lead to unbiased and fair decisions. However, algorithms have at times been found to contain inherent biases, often as a result of the data used to train the algorithmic model. For government agencies, the problem of biased input data constitutes one of the biggest risks they face when using machine learning.

    While algorithmic bias can involve a number of factors other than race, allegations of racial bias have raised concerns about certain government applications of AI, particularly in the realm of criminal justice. Court systems across the United States have begun using algorithms to perform criminal risk assessments, which estimate a defendant's likelihood of committing future crimes. In nine US states, judges use the risk scores produced by these assessments as a factor in criminal sentencing. However, criminal risk scores have raised concerns over potential algorithmic bias and led to calls for greater examination.5

    In 2016, ProPublica conducted a statistical analysis of algorithm-based criminal risk assessments in Broward County, Florida. Controlling for defendant criminal history, gender, and age, the researchers concluded that black defendants were 77 percent more likely than white defendants to be labeled at higher risk of committing a violent crime in the future.6 While the company that developed the tool denied the presence of bias, few of the criminal risk assessment tools used across the United States have undergone extensive, independent study and review.7
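
    One simple, illustrative bias check (our sketch, not ProPublica's actual methodology) is to compare error rates across groups; here, the false-positive rate, i.e., how often people who did not reoffend were nonetheless labeled high risk.

    ```python
    # Toy comparison of false-positive rates across two groups for a
    # binary risk label. The arrays below are made-up illustrative data.
    import numpy as np

    def false_positive_rate(y_true, y_pred, mask):
        # Within one group: share of non-reoffenders labeled high risk.
        negatives = (~y_true) & mask
        if negatives.sum() == 0:
            return float("nan")
        return (y_pred & negatives).sum() / negatives.sum()

    y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1], dtype=bool)  # reoffended?
    y_pred = np.array([1, 0, 1, 1, 1, 0, 1, 0], dtype=bool)  # labeled high risk?
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])               # group membership

    for g in np.unique(group):
        print(f"group {g}: FPR = {false_positive_rate(y_true, y_pred, group == g):.2f}")
    ```

    A large gap between the two groups' rates is exactly the kind of disparity that warrants independent review.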

    The immediate fallout from algorithmic risks can include inappropriate or even illegal decisions. And due to the speed at which algorithms operate, the consequences can quickly get out of hand. The potential long-term implications for government agencies include reputational, operational, technological, policy, and legal risks.

    Taking the reins

    To effectively manage algorithmic risks, traditional risk management frameworks should be modernized. Government CDOs should develop and adopt new approaches that are built on strong foundations of enterprise risk management and aligned with leading practices and regulatory requirements. Figure 1 depicts such an approach and its specific elements.

    Figure 1. A framework for algorithmic risk management

    Strategy, policy, and governance

    Create an algorithmic risk management strategy and governance structure to manage technical and cultural risks. This should include principles, ethics, policies, and standards; roles and responsibilities; control processes and procedures; and appropriate personnel selection and training. Providing transparency and processes to handle inquiries can also help organizations use algorithms responsibly.

    From a policy perspective, the idea that automated decisions should be “explainable” to those affected has recently gained prominence, although this is still a technically challenging proposition. In May 2018, the European Union began enforcing laws that require companies to be able to explain how their algorithms operate and reach decisions.8 Meanwhile, in December 2017, the New York City Council passed a law establishing an Automated Decision Systems Task Force to study the city’s use of algorithmic systems and provide recommendations. The body aims to provide guidance on increasing the transparency of algorithms affecting citizens and addressing suspected algorithmic bias.9

    Design, development, deployment, and use

    Develop processes and approaches, aligned with the organization’s algorithmic risk management governance structure, to address potential issues across the algorithmic life cycle, from data selection to algorithm design to integration to live use in production.

    This stage offers opportunities to build algorithms in a way that satisfies the growing emphasis on “explainability” mentioned earlier. Researchers have developed a number of techniques for constructing algorithmic models so that they can better explain themselves. One method involves creating generative adversarial networks (GANs), which set up a competing relationship between two algorithms within a machine learning model. In such models, one algorithm generates new data and the other assesses it, helping to determine whether the former operates as it should.10
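
    The sketch below shows that adversarial relationship in miniature. It is a generic toy example, assuming PyTorch, not a production explainability tool: the generator learns to mimic a simple target distribution while the discriminator learns to tell real samples from generated ones.

    ```python
    # Toy GAN: a generator vs. a discriminator on a 1-D Gaussian target.
    import torch
    import torch.nn as nn

    gen = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
    disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    real = torch.randn(256, 1) * 0.5 + 2.0  # target distribution: N(2, 0.5)

    for step in range(500):
        noise = torch.randn(256, 4)
        fake = gen(noise)

        # Discriminator: push real samples toward label 1, generated toward 0.
        loss_d = bce(disc(real), torch.ones(256, 1)) + \
                 bce(disc(fake.detach()), torch.zeros(256, 1))
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()

        # Generator: try to make the discriminator score its output as real.
        loss_g = bce(disc(fake), torch.ones(256, 1))
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()
    ```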

    Another technique incorporates more direct relationships between certain variables into the algorithmic model to help avoid the emergence of a black box problem. Adding a monotonic layer to a model—in which increasing one variable produces a predictable, quantifiable change in another—can make the inner workings of complex algorithms easier to inspect.11
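
    As a toy illustration (our construction, not any specific library's API), constraining a layer's weights to be positive guarantees a monotone response: increasing any input can only increase the output, which gives reviewers one property of the model they can verify outright.

    ```python
    # A "monotonic layer": positive weights make the output non-decreasing
    # in every input, so the variable-to-output relationship is predictable.
    import torch
    import torch.nn as nn

    class MonotonicLinear(nn.Module):
        def __init__(self, in_features, out_features):
            super().__init__()
            self.raw_weight = nn.Parameter(torch.randn(out_features, in_features))
            self.bias = nn.Parameter(torch.zeros(out_features))

        def forward(self, x):
            # softplus keeps every effective weight strictly positive
            weight = nn.functional.softplus(self.raw_weight)
            return x @ weight.T + self.bias
    ```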

    Monitoring and testing

    Establish processes for assessing and overseeing algorithm data inputs, workings, and outputs, leveraging state-of-the-art tools as they become available. Seek objective reviews of algorithms by internal and external parties.

    Evaluators can not only assess model outcomes and impacts on a large scale, but also probe how specific factors affect a model’s individual outputs. For instance, researchers can examine specific areas of a model, methodically and automatically testing different combinations of inputs—such as by inserting or removing different parts of a phrase in turn—to help identify how various factors in the model affect outputs.12
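
    The idea can be sketched in a few lines. Here `model_score` is a hypothetical stand-in for any classifier's scoring function; real tools such as LIME do something similar in a more principled way (see endnote 12).

    ```python
    # Drop each token of an input in turn and watch how the score moves:
    # tokens whose removal shifts the score most influence the output most.
    def model_score(tokens):
        # Toy scorer standing in for a real model: counts "risky" words.
        risky = {"fraud", "urgent", "wire"}
        return sum(t in risky for t in tokens) / max(len(tokens), 1)

    phrase = "urgent wire transfer to settle fraud claim".split()
    baseline = model_score(phrase)
    for i, token in enumerate(phrase):
        perturbed = phrase[:i] + phrase[i + 1:]
        print(f"{token:>10}: influence {baseline - model_score(perturbed):+.3f}")
    ```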

    The Allegheny County approach

    Some governments have begun building transparency considerations into their use of algorithms and machine learning. Allegheny County, Pennsylvania, provides one such example. In August 2016, the county implemented an algorithm-based tool—the Allegheny Family Screening Tool—to assess risks to children in suspected abuse or endangerment cases.13 The tool conducts a statistical analysis of more than 100 variables to assign a risk score from 1 to 20 to each incoming call reporting suspected child mistreatment.14 Call screeners at the Office of Children, Youth, and Families consult the algorithm’s risk assessment to help determine which cases to investigate. Studies suggest that the tool has enabled a double-digit reduction in the percentage of low-risk cases proposed for review, as well as a smaller increase in the percentage of high-risk calls marked for investigation.15
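
    As a toy illustration of the scoring step (our sketch, not the county's actual method), a model's risk probability can be bucketed into a 1-to-20 screening score of the kind call screeners consult:

    ```python
    # Map a model probability in [0, 1] to an integer screening score 1-20.
    def screening_score(probability: float) -> int:
        return min(20, max(1, 1 + int(probability * 20)))

    for p in (0.02, 0.31, 0.78, 0.99):
        print(p, "->", screening_score(p))  # 1, 7, 16, 20
    ```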

    Like other risk assessment tools, the Allegheny Family Screening Tool has received criticism for potential inaccuracies or bias stemming from its underlying data and proxies. These concerns underscore the importance of the continued evolution of these tools. Yet the Allegheny County case also exemplifies potential practices to increase transparency. Developed by academics in the fields of social welfare and data analytics, the tool is county-owned and was implemented following an independent ethics review.16 County administrators discuss the tool in public sessions, and call screeners use it only to decide which calls to investigate rather than as a basis for more drastic measures. The county’s steps demonstrate one way that government agencies can help increase accountability around their use of algorithms.

    Are you ready to manage algorithmic risks?

    A good starting point for implementing an algorithmic risk management framework is to ask important questions about your agency’s preparedness to manage algorithmic risks (a simple algorithm inventory, sketched after this list, can help answer the first two). For example:

    • Where are algorithms deployed in your government organization or body, and how are they used?
    • What is the potential impact should those algorithms function improperly?
    • How well does senior management within your organization understand the need to manage algorithmic risks?
    • What is the governance structure for overseeing the risks emanating from algorithms?
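
    A hypothetical sketch of one inventory entry follows; the fields are illustrative, not a prescribed schema.

    ```python
    # One record in a simple algorithm inventory: enough to answer
    # "where are algorithms deployed?" and "what if they go wrong?"
    from dataclasses import dataclass, field

    @dataclass
    class AlgorithmRecord:
        name: str
        owner: str                      # accountable business owner
        purpose: str                    # the decision it supports
        impact_if_wrong: str            # "low" / "medium" / "high"
        inputs: list = field(default_factory=list)
        last_reviewed: str = "never"

    registry = [
        AlgorithmRecord(
            name="benefits-fraud-screen",
            owner="Office of Program Integrity",
            purpose="flag claims for manual review",
            impact_if_wrong="high",
            inputs=["claims history", "payment records"],
        ),
    ]
    ```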

    Adopting effective algorithmic risk management practices is not a journey that government agencies need to take alone. The growing awareness of algorithmic risks among researchers, consumer advocacy groups, lawmakers, regulators, and other stakeholders should contribute to a growing body of knowledge about algorithmic risks and, over time, risk management standards. In the meantime, it’s important for CDOs to evaluate their use of algorithms in high-risk and high-impact situations and implement leading practices to manage those risks intelligently so that their organizations can harness algorithms to enhance public value.

    The rapid proliferation of powerful algorithms in many facets of government operations is in full swing and will likely continue unabated for years to come. The use of intelligent algorithms offers a wide range of potential benefits to governments, including improved decision-making, strategic planning, operational efficiency, and even risk management. But in order to realize these benefits, organizations will likely need to recognize and manage the inherent risks associated with the design, implementation, and use of algorithms—risks that could increase unless governments invest thoughtfully in algorithmic risk management capabilities.

    Authors

    Nancy Albinson is a managing director with Deloitte & Touche LLP and leader of Deloitte Risk & Financial Advisory’s innovation program. She is based in Parsippany, NJ.

    Dilip Krishna is the chief technology officer and a managing director with the Regulatory & Operational Risk practice at Deloitte & Touche LLP. He is based in New York City.

    Yang Chu is a senior manager at Deloitte & Touche LLP. She is based in San Francisco, CA.

    William D. Eggers is the executive director of Deloitte’s Center for Government Insights, where he is responsible for the firm’s public sector thought leadership. He is based in Arlington, VA.

    Adira Levine is a strategy consultant at Deloitte Consulting LLP, where her work is primarily aligned to the public sector. She is based in Arlington, VA.

    About the Beeck Center for Social Impact + Innovation

    The Beeck Center for Social Impact + Innovation at Georgetown University engages global leaders to drive social change at scale. Through our research, education, and convenings, we provide innovative tools that leverage the power of capital, data, technology, and policy to improve lives. We embrace a cross-disciplinary approach to building solutions at scale.

     

    Cover image by: Lucie Rice

    Endnotes
      1. William D. Eggers, David Schatsky, and Peter Viechnicki, AI-augmented government: Using cognitive technologies to redesign public sector work, Deloitte University Press, April 26, 2017.

      2. Dilip Krishna, Nancy Albinson, and Yang Chu, “Managing algorithmic risks,” CIO Journal, Wall Street Journal, October 25, 2017.

      3. Jamie Condliffe, “Algorithms probably caused a flash crash of the British pound,” MIT Technology Review, October 7, 2016.

      4. Issie Lapowsky, “Crime-predicting algorithms may not fare much better than untrained humans,” Wired, January 17, 2018.

      5. Julia Angwin et al., “Machine bias,” ProPublica, May 23, 2016.

      6. Ibid.

      7. Ibid.

      8. Bahar Gholipour, “We need to open the AI black box before it’s too late,” Futurism, January 18, 2018.

      9. Julia Powles, “New York City’s bold, flawed attempt to make algorithms accountable,” New Yorker, December 20, 2017.

      10. Deep Learning for Java, “GAN: A beginner’s guide to generative adversarial networks,” accessed May 3, 2018.

      11. Paul Voosen, “How AI detectives are cracking open the black box of deep learning,” Science, July 6, 2017.

      12. Local Interpretable Model-Agnostic Explanations (LIME) and other techniques that build on it help examine local instances of a model prediction. (Patrick Hall et al., “Machine learning interpretability with H2O driverless AI,” H2O.ai, April 2018; Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin, “Why should I trust you? Explaining the predictions of any classifier,” arXiv.org, February 16, 2016; Voosen, “How AI detectives are cracking open the black box of deep learning.”)

      13. Dan Hurley, “Can an algorithm tell when kids are in danger?,” New York Times Magazine, January 2, 2018.

      14. Virginia Eubanks, “A child abuse prediction model fails poor families,” Wired, January 15, 2018.

      15. Hurley, “Can an algorithm tell when kids are in danger?”

      16. Ibid.


    Topics in this article

    C-suite, Government, Information Technology
