How CDOs can manage algorithmic risks

by Nancy Albinson, Dilip Krishna, Yang Chu, William D. Eggers, and Adira Levine
Deloitte Insights
    07 June 2018


    Automated decision-making algorithms can have many uses in government—but they also entail risks such as bias, errors, and fraud. CDOs can help reduce these risks while capitalizing on these tools’ potential.

    The rise of advanced data analytics and cognitive technologies has led to an explosion in the use of complex algorithms across a wide range of industries and business functions, as well as in government. Whether deployed to predict potential crime hotspots or detect fraud and abuse in entitlement programs, these continually evolving sets of rules for automated or semi-automated decision-making can give government agencies new ways to achieve goals, accelerate performance, and increase effectiveness.

    However, algorithm-based tools—such as machine learning applications of artificial intelligence (AI)—also carry a potential downside. Even as many decisions enabled by algorithms have an increasingly profound impact, growing complexity can turn those algorithms into inscrutable black boxes. Although often enshrouded in an aura of objectivity and infallibility, algorithms can be vulnerable to a wide variety of risks, including accidental or intentional biases, errors, and fraud.


    Chief data officers (CDOs), as the leaders of their organization’s data function, have an important role to play in helping governments harness this new capability while keeping the accompanying risks at bay.

    Understanding the risks

    Governments increasingly rely on data-driven insights powered by algorithms. Federal, state, and local governments are harnessing AI to solve challenges and expedite processes—ranging from answering citizenship questions via virtual assistants at the Department of Homeland Security to evaluating battlefield wounds with machine learning-based monitors.1 In the coming years, machine learning algorithms will also likely power countless new Internet of Things (IoT) applications in smart cities and smart military bases.

    While such changes can be transformative, instances of algorithms going wrong have also increased, typically stemming from human biases, technical flaws, usage errors, or security vulnerabilities. For instance:

    • Social media algorithms have come under scrutiny for the way they may influence public opinion.2
    • In October 2016, months after the Brexit referendum, algorithms were blamed for a flash crash that sent the British pound down six percent in two minutes.3
    • Investigations have found that an algorithm used by criminal justice systems across the United States to predict recidivism rates is biased against certain racial groups.4

    Typically, machine learning algorithms are first programmed and then trained using existing sample data. Once training concludes, algorithms can analyze new data, providing outputs based on what they learned during training and potentially any other data they’ve analyzed since. When it comes to algorithmic risks, three stages of that process can be especially vulnerable:

    • Data input: Problems can include biases in the data used for training the algorithm (see sidebar “The problem of algorithmic bias”). Other problems can arise from incomplete, outdated, or irrelevant input data; insufficiently large and diverse sample sizes; inappropriate data collection techniques; or a mismatch between training data and actual input.
    • Algorithm design: Algorithms can incorporate biased logic, flawed assumptions or judgments, structural inequities, inappropriate modeling techniques, or coding errors.
    • Output decisions: Users can interpret algorithmic output incorrectly, apply it inappropriately, or disregard its underlying assumptions.
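Some of the data-input risks listed above lend themselves to simple automated checks before training. The sketch below is illustrative only: the record schema, thresholds, and reference date are assumptions, not part of any specific framework. It flags missing values, stale records, and a mismatch between training and production data.

```python
from datetime import date

def audit_training_data(rows, reference_rows, max_age_years=5, today=date(2018, 6, 7)):
    """Run basic input-data checks; return a list of warning strings.

    rows / reference_rows: lists of dicts with hypothetical 'value' and
    'collected' keys (an assumed schema, for illustration only).
    """
    warnings = []

    # Completeness: flag records with missing fields.
    missing = [r for r in rows if r.get("value") is None]
    if missing:
        warnings.append(f"{len(missing)} record(s) have missing values")

    # Freshness: flag training records older than the cutoff.
    cutoff = today.replace(year=today.year - max_age_years)
    stale = [r for r in rows if r["collected"] < cutoff]
    if stale:
        warnings.append(f"{len(stale)} record(s) older than {max_age_years} years")

    # Representativeness: compare the training mean against production data.
    def mean(rs):
        vals = [r["value"] for r in rs if r.get("value") is not None]
        return sum(vals) / len(vals)

    drift = abs(mean(rows) - mean(reference_rows))
    if drift > 0.25 * abs(mean(reference_rows)):
        warnings.append(f"training/production mean differs by {drift:.2f}")

    return warnings
```

A real pipeline would add checks for sample size, class balance, and collection methodology, but even these three catch the most common input-data failures before they reach a model.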

    The problem of algorithmic bias

    Governments have used algorithms to make various decisions in criminal justice, human services, health care, and other fields. In theory, this should lead to unbiased and fair decisions. However, algorithms have at times been found to contain inherent biases, often as a result of the data used to train the algorithmic model. For government agencies, the problem of biased input data constitutes one of the biggest risks they face when using machine learning.

    While algorithmic bias can involve factors other than race, allegations of racial bias have raised concerns about certain government applications of AI, particularly in criminal justice. Some court systems across the country have begun using algorithms to perform criminal risk assessments, which estimate a defendant’s likelihood of reoffending. In nine US states, judges use the risk scores produced in these assessments as a factor in criminal sentencing. However, criminal risk scores have raised concerns over potential algorithmic bias and led to calls for greater scrutiny.5

    In 2016, ProPublica conducted a statistical analysis of algorithm-based criminal risk assessments in Broward County, Florida. Controlling for defendant criminal history, gender, and age, the researchers concluded that black defendants were 77 percent more likely than others to be labeled at higher risk of committing a violent crime in the future.6 While the company that developed the tool denied the presence of bias, few of the criminal risk assessment tools used across the United States have undergone extensive, independent study and review.7
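Group-level checks of the kind ProPublica performed can be approximated with simple metrics. The sketch below, using fabricated records (not the ProPublica data) and an assumed field layout, compares false positive rates — the share of non-reoffending defendants labeled high risk — across two groups:

```python
def false_positive_rate(records, group):
    """FPR for one group: P(labeled high risk | did not reoffend).

    records: dicts with 'group', 'high_risk' (prediction), 'reoffended' (outcome).
    """
    negatives = [r for r in records if r["group"] == group and not r["reoffended"]]
    if not negatives:
        return None
    flagged = sum(1 for r in negatives if r["high_risk"])
    return flagged / len(negatives)

def disparity(records, group_a, group_b):
    """Ratio of false positive rates between two groups (parity => 1.0)."""
    return false_positive_rate(records, group_a) / false_positive_rate(records, group_b)

# Illustrative, fabricated records -- chosen only to show the calculation.
sample = (
    [{"group": "A", "high_risk": True,  "reoffended": False}] * 4
  + [{"group": "A", "high_risk": False, "reoffended": False}] * 6
  + [{"group": "B", "high_risk": True,  "reoffended": False}] * 2
  + [{"group": "B", "high_risk": False, "reoffended": False}] * 8
)
```

Here group A's false positive rate (0.4) is double group B's (0.2), a disparity of 2.0. Which fairness metric to monitor (false positive parity, calibration, and so on) is itself a policy choice, since several such criteria cannot all hold simultaneously.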

    The immediate fallout from algorithmic risks can include inappropriate or even illegal decisions. And due to the speed at which algorithms operate, the consequences can quickly get out of hand. The potential long-term implications for government agencies include reputational, operational, technological, policy, and legal risks.

    Taking the reins

    Managing algorithmic risks effectively requires modernizing traditional risk management frameworks. Government CDOs should develop and adopt new approaches built on strong foundations of enterprise risk management and aligned with leading practices and regulatory requirements. Figure 1 depicts such an approach and its specific elements.

    A framework for algorithmic risk management

    Strategy, policy, and governance

    Create an algorithmic risk management strategy and governance structure to manage technical and cultural risks. This should include principles, ethics, policies, and standards; roles and responsibilities; control processes and procedures; and appropriate personnel selection and training. Providing transparency and processes to handle inquiries can also help organizations use algorithms responsibly.

    From a policy perspective, the idea that automated decisions should be “explainable” to those affected has recently gained prominence, although this is still a technically challenging proposition. In May 2018, the European Union began enforcing laws that require companies to be able to explain how their algorithms operate and reach decisions.8 Meanwhile, in December 2017, the New York City Council passed a law establishing an Automated Decision Systems Task Force to study the city’s use of algorithmic systems and provide recommendations. The body aims to provide guidance on increasing the transparency of algorithms affecting citizens and addressing suspected algorithmic bias.9

    Design, development, deployment, and use

    Develop processes and approaches, aligned with the organization’s algorithmic risk management governance structure, to address potential issues across the algorithmic life cycle: data selection, algorithm design, integration, and live use in production.

    This stage offers opportunities to build algorithms in a way that satisfies the growing emphasis on “explainability” mentioned earlier. Researchers have developed a number of techniques to construct algorithmic models in ways that allow them to better explain themselves. One method involves generative adversarial networks (GANs), which set up a competing relationship between two algorithms within a machine learning model: one algorithm generates new data and the other assesses it, helping to determine whether the former operates as it should.10
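The generate-and-assess loop described above can be sketched in miniature. This toy version is not a real GAN — there is no learned discriminator or gradient training; the “assessor” is just a z-score check against training statistics — but it illustrates how a second component can flag when a generating component drifts away from the data it should resemble:

```python
import random
import statistics

def fit_assessor(training_values, z_limit=3.0):
    """Return a function that flags values implausible under the training data."""
    mu = statistics.mean(training_values)
    sigma = statistics.stdev(training_values)
    def plausible(x):
        return abs(x - mu) <= z_limit * sigma
    return plausible

def generate_and_assess(training_values, n=1000, seed=42):
    """One generator proposes values; the assessor accepts or rejects each.

    Returns (acceptance rate for a faithful generator,
             acceptance rate for a drifted/faulty generator).
    """
    rng = random.Random(seed)
    mu = statistics.mean(training_values)
    sigma = statistics.stdev(training_values)
    plausible = fit_assessor(training_values)
    # A well-behaved generator samples near the training distribution...
    good = sum(plausible(rng.gauss(mu, sigma)) for _ in range(n))
    # ...while a faulty one drifts far from it; the assessor catches this.
    bad = sum(plausible(rng.gauss(mu + 10 * sigma, sigma)) for _ in range(n))
    return good / n, bad / n
```

In a true GAN the assessor (discriminator) is itself a trained model and the two improve jointly; the structural idea — one component checking another — is the same.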

    Another technique incorporates more direct relationships between certain variables into the algorithmic model to help avoid the emergence of a black box problem. Adding a monotonic layer to a model—in which changing one variable produces a predictable, quantifiable change in another—can increase clarity into the inner workings of complex algorithms.11
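Whether or not a model contains an explicitly monotonic layer, the property itself is easy to verify empirically: sweep one input while holding the others fixed and confirm the output never moves in the wrong direction. A minimal sketch, where the scoring function is an invented stand-in for a real model:

```python
def is_monotone_increasing(model, feature_values, fixed_inputs):
    """Check that model output never decreases as one feature sweeps upward.

    model: callable taking (feature_value, **fixed_inputs) -> score.
    feature_values: grid of values for the feature under test, in ascending order.
    """
    outputs = [model(v, **fixed_inputs) for v in feature_values]
    return all(a <= b for a, b in zip(outputs, outputs[1:]))

# Hypothetical stand-in model: risk rises with prior incidents, falls with age.
def toy_risk_score(prior_incidents, age):
    return 2.0 * prior_incidents - 0.1 * age
```

A sweep like `is_monotone_increasing(toy_risk_score, range(10), {"age": 30})` confirms the expected direction; running the same check after each retraining can catch a model that has quietly learned a counterintuitive relationship.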

    Monitoring and testing

    Establish processes for assessing and overseeing algorithm data inputs, workings, and outputs, leveraging state-of-the-art tools as they become available. Seek objective reviews of algorithms by internal and external parties.

    Evaluators can not only assess model outcomes and impacts on a large scale, but also probe how specific factors affect a model’s individual outputs. For instance, researchers can examine specific areas of a model, methodically and automatically testing different combinations of inputs—such as by inserting or removing different parts of a phrase in turn—to help identify how various factors in the model affect outputs.12
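The perturbation approach described here — removing parts of an input in turn and watching the output move — needs nothing more than repeated calls to the model. A sketch with a toy bag-of-words scorer (the per-word weights are invented purely for illustration):

```python
def explain_by_ablation(score, words):
    """Attribute a text score to each word by deleting words one at a time.

    score: callable mapping a list of words to a number (the model under test).
    Returns {(position, word): score drop when that word is removed}.
    """
    base = score(words)
    contributions = {}
    for i, word in enumerate(words):
        ablated = words[:i] + words[i + 1:]
        contributions[(i, word)] = base - score(ablated)
    return contributions

# Toy model: sums invented per-word weights (a stand-in for a real classifier).
WEIGHTS = {"urgent": 3.0, "risk": 2.0, "the": 0.0, "report": 0.5}

def toy_score(words):
    return sum(WEIGHTS.get(w, 0.0) for w in words)
```

Running this on a phrase shows which words drive the score; techniques such as LIME (endnote 12) build richer local explanations on the same black-box idea of perturbing inputs and observing outputs.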

    The Allegheny County approach

    Some governments have begun building transparency considerations into their use of algorithms and machine learning. Allegheny County, Pennsylvania, provides one such example. In August 2016, the county implemented an algorithm-based tool—the Allegheny Family Screening Tool—to assess risks to children in suspected abuse or endangerment cases.13 The tool conducts a statistical analysis of more than 100 variables to assign a risk score from 1 to 20 to each incoming call reporting suspected child mistreatment.14 Call screeners at the Office of Children, Youth, and Families consult the algorithm’s risk assessment to help determine which cases to investigate. Studies suggest that the tool has enabled a double-digit reduction in the percentage of low-risk cases proposed for review, as well as a smaller increase in the percentage of high-risk calls marked for investigation.15

    Like other risk assessment tools, the Allegheny Family Screening Tool has received criticism for potential inaccuracies or bias stemming from its underlying data and proxies. These concerns underscore the importance of the continued evolution of these tools. Yet the Allegheny County case also exemplifies potential practices to increase transparency. Developed by academics in the fields of social welfare and data analytics, the tool is county-owned and was implemented following an independent ethics review.16 County administrators discuss the tool in public sessions, and call screeners use it only to decide which calls to investigate rather than as a basis for more drastic measures. The county’s steps demonstrate one way that government agencies can help increase accountability around their use of algorithms.

    Are you ready to manage algorithmic risks?

    A good starting point for implementing an algorithmic risk management framework is to ask important questions about your agency’s preparedness to manage algorithmic risks. For example:

    • Where are algorithms deployed in your government organization or body, and how are they used?
    • What is the potential impact should those algorithms function improperly?
    • How well does senior management within your organization understand the need to manage algorithmic risks?
    • What is the governance structure for overseeing the risks emanating from algorithms?

    Adopting effective algorithmic risk management practices is not a journey that government agencies need to take alone. The growing awareness of algorithmic risks among researchers, consumer advocacy groups, lawmakers, regulators, and other stakeholders should contribute to a growing body of knowledge about algorithmic risks and, over time, risk management standards. In the meantime, it’s important for CDOs to evaluate their use of algorithms in high-risk and high-impact situations and implement leading practices to manage those risks intelligently so that their organizations can harness algorithms to enhance public value.

    The rapid proliferation of powerful algorithms in many facets of government operations is in full swing and will likely continue unabated for years to come. The use of intelligent algorithms offers a wide range of potential benefits to governments, including improved decision-making, strategic planning, operational efficiency, and even risk management. But in order to realize these benefits, organizations will likely need to recognize and manage the inherent risks associated with the design, implementation, and use of algorithms—risks that could increase unless governments invest thoughtfully in algorithmic risk management capabilities.

    Authors

    Nancy Albinson is a managing director with Deloitte & Touche LLP and leader of Deloitte Risk & Financial Advisory’s innovation program. She is based in Parsippany, NJ.

    Dilip Krishna is the chief technology officer and a managing director with the Regulatory & Operational Risk practice at Deloitte & Touche LLP. He is based in New York City.

    Yang Chu is a senior manager at Deloitte & Touche LLP. She is based in San Francisco, CA.

    William D. Eggers is the executive director of Deloitte’s Center for Government Insights, where he is responsible for the firm’s public sector thought leadership. He is based in Arlington, VA.

    Adira Levine is a strategy consultant at Deloitte Consulting LLP, where her work is primarily aligned to the public sector. She is based in Arlington, VA.


    About the Beeck Center for Social Impact + Innovation

    The Beeck Center for Social Impact + Innovation at Georgetown University engages global leaders to drive social change at scale. Through our research, education, and convenings, we provide innovative tools that leverage the power of capital, data, technology, and policy to improve lives. We embrace a cross-disciplinary approach to building solutions at scale.

     

    Cover image by: Lucie Rice

    Endnotes
      1. William D. Eggers, David Schatsky, and Peter Viechnicki, AI-augmented government: Using cognitive technologies to redesign public sector work, Deloitte University Press, April 26, 2017.

      2. Dilip Krishna, Nancy Albinson, and Yang Chu, “Managing algorithmic risks,” CIO Journal, Wall Street Journal, October 25, 2017.

      3. Jamie Condliffe, “Algorithms probably caused a flash crash of the British pound,” MIT Technology Review, October 7, 2016.

      4. Issie Lapowsky, “Crime-predicting algorithms may not fare much better than untrained humans,” Wired, January 17, 2018.

      5. Julia Angwin et al., “Machine bias,” ProPublica, May 23, 2016.

      6. Ibid.

      7. Ibid.

      8. Bahar Gholipour, “We need to open the AI black box before it’s too late,” Futurism, January 18, 2018.

      9. Julia Powles, “New York City’s bold, flawed attempt to make algorithms accountable,” New Yorker, December 20, 2017.

      10. Deep Learning for Java, “GAN: A beginner’s guide to generative adversarial networks,” accessed May 3, 2018.

      11. Paul Voosen, “How AI detectives are cracking open the black box of deep learning,” Science, July 6, 2017.

      12. Local Interpretable Model-Agnostic Explanations (LIME) and related techniques that build on it examine local instances of a model’s predictions. (Patrick Hall et al., “Machine learning interpretability with H2O driverless AI,” H2O.ai, April 2018; Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin, “Why should I trust you? Explaining the predictions of any classifier,” arXiv.org, February 16, 2016; Voosen, “How AI detectives are cracking open the black box of deep learning.”)

      13. Dan Hurley, “Can an algorithm tell when kids are in danger?,” New York Times Magazine, January 2, 2018.

      14. Virginia Eubanks, “A child abuse prediction model fails poor families,” Wired, January 15, 2018.

      15. Hurley, “Can an algorithm tell when kids are in danger?”

      16. Ibid.

