Deloitte Insights
    9 minute read 17 April 2019

    Can AI be ethical? Why enterprises shouldn’t wait for AI regulation

    • David Schatsky United States
    • Vivek (Vic) Katyal United States
    • Satish Iyengar United States
    • Rameeta Chauhan India

    With AI applications becoming ubiquitous in and out of the workplace, can the technology be controlled to avoid unintended or adverse outcomes? Organizations are launching a range of initiatives to address ethical concerns.

    With great power, the saying goes, comes great responsibility. As artificial intelligence (AI) technology becomes more powerful, many groups are taking an interest in ensuring its responsible use. The questions that surround AI ethics can be difficult, and the operational aspects of addressing AI ethics are complex. Fortunately, these questions are already driving debate and action in the public and commercial sectors. Organizations using AI-based applications should take note.

    Signals

• In 2018, media mentions of AI and ethics doubled compared with the previous year, with over 90 percent indicating positive or neutral sentiment.1
    • About a third of executives in a recent Deloitte survey named ethical risks as one of the top three potential concerns related to AI.2
    • Since 2017, more than two dozen national governments have released AI strategies, road maps, or plans that focus on developing ethics standards, policies, regulations, or frameworks.3
    • Governments are setting up AI ethics councils or task forces and collaborating with other national governments, corporations, and other organizations on the ethics of AI.4
    • Major technology companies such as Google, IBM, and Facebook have developed tools, designed guidelines, and appointed dedicated AI governance teams to address ethical issues, such as bias and lack of transparency.5
    • Enterprises across industries such as financial services, life sciences and health care, retail, and media are joining consortia to collaborate with technology vendors, universities, governments, and other players in their respective industries to promote ethical AI standards and solutions.6

    As adoption of increasingly capable AI grows, ethics concerns emerge


    A growing number of companies see AI as critical to their future. But concerns about possible misuse of the technology are on the rise. In a recent Deloitte survey, 76 percent of executives said they expected AI to “substantially transform” their companies within three years,7 while about a third of respondents named ethical risks as one of the top three concerns about the technology. The press has widely reported incidents in which AI has been misused or had unintended consequences.8

    The conversation about responsible AI is hardly limited to concerns about controversial applications of the technology, such as automated weapons. It also considers how the infusion of AI into common activities such as social media interactions, credit decisions, and hiring can be controlled to avoid unintended or adverse outcomes for individuals and businesses. The discussion around AI and ethics has grown far more urgent in the last decade or so, and many initiatives to tackle ethical questions surrounding AI have taken shape in the last couple of years. This urgency is driven primarily by recent advances in AI technologies, their growing adoption, and the increasing criticality of AI to business decision-making.

    It’s worth noting that concerns about the ethics of technology generally, and AI specifically, are nothing new. The topic was explored at least as far back as 1942, when science-fiction writer Isaac Asimov introduced his Three Laws of Robotics in a short story.9 In 1976, German-American computer scientist Joseph Weizenbaum suggested that AI technology should not be used to replace people in positions that require abilities such as compassion, intuition, and creativity.10 Still, today’s AI presents enormous opportunities for businesses while introducing some novel risks that need to be managed.

    AI systems may pose diverse ethical risks

    Some of the ethical risks associated with AI use differ from those associated with conventional information technology. This is due to a variety of factors, including the role played by large datasets in AI systems, the novel applications of AI technology (such as facial recognition), and the capabilities that some systems demonstrate, from automatic learning to superhuman perception. As MIT professors Stefan Helmreich and Heather Paxson note, “Ethical judgments are built into our information infrastructures themselves. That’s what AI does: It automates judgments—yes, no; right, wrong.”11 Prominent issues associated with ethical AI design, development, and deployment include the following:

    Bias and discrimination. AI systems learn from the datasets with which they are trained. Depending on how a dataset is compiled or constructed, the potential exists that the data could reflect assumptions or biases—such as gender, racial, or income biases—that could influence the behavior of a system based on that data. Developers rarely intend any bias, yet instances of AI-driven bias or discrimination have been reported in application areas such as recruiting, credit scoring, and judicial sentencing.12 Organizations need to ensure that their AI solutions make decisions fairly and do not propagate biases when providing recommendations.
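
    The kind of fairness check this implies can be made concrete. The sketch below computes a "four-fifths rule" disparate impact ratio for a toy set of automated hiring decisions; the data, group labels, and 0.8 threshold are illustrative assumptions, not anything drawn from this article.

```python
# Illustrative only: a simple disparate impact check on model decisions.
def selection_rate(decisions, groups, value):
    """Fraction of positive decisions (1s) received by members of one group."""
    outcomes = [d for d, g in zip(decisions, groups) if g == value]
    return sum(outcomes) / len(outcomes)

def disparate_impact(decisions, groups, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's.
    Ratios below roughly 0.8 (the "four-fifths rule") are a common red flag."""
    return (selection_rate(decisions, groups, protected)
            / selection_rate(decisions, groups, reference))

# Toy hiring decisions (1 = offer extended) for two demographic groups.
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(decisions, groups, protected="B", reference="A")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33, well below 0.8 -> review the model
```

    A real audit would use far larger samples and statistical significance tests, but the underlying question is the same one posed above: do outcomes differ systematically by group?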

    Lack of transparency. It is natural for customers or other parties affected by technology to want to know something about how the system that affected them works—what data it is using and how it is making decisions. However, much AI development entails building highly effective models whose inner workings are not well understood and cannot be readily explained—they are black boxes. Techniques are emerging that help shine light inside the black box of certain machine learning models, making them more interpretable and accurate, but they are not suitable for all applications.13 Ethical AI use takes into account a responsibility to be transparent about the workings of systems and the use of data wherever possible.
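
    One family of techniques for shining light into a black box is model-agnostic probing. As a minimal sketch (the toy "model" and data are invented here; the tools cited in this article are far more sophisticated), permutation importance shuffles one input at a time and measures how much the model's accuracy drops:

```python
# Sketch of a model-agnostic interpretability probe: permutation importance.
# The "black box" and data here are invented stand-ins for illustration.
import random

def black_box(x):
    # Stand-in for an opaque model; in practice this could be any predictor.
    return 1 if x[0] + 0.1 * x[1] > 0.5 else 0

def accuracy(model, X, y):
    return sum(model(row) == t for row, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, trials=20, seed=0):
    """Mean accuracy drop when one feature column is shuffled across rows."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    total_drop = 0.0
    for _ in range(trials):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        X_perm = [list(row) for row in X]
        for row, v in zip(X_perm, col):
            row[feature] = v
        total_drop += base - accuracy(model, X_perm, y)
    return total_drop / trials

X = [[0.9, 0.1], [0.8, 0.9], [0.1, 0.2], [0.2, 0.8], [0.7, 0.3], [0.3, 0.6]]
y = [black_box(row) for row in X]

imp_strong = permutation_importance(black_box, X, y, feature=0)
imp_weak = permutation_importance(black_box, X, y, feature=1)
print(imp_strong, imp_weak)  # the dominant input shows a much larger drop
```

    The probe never inspects the model's internals, which is exactly why this style of technique applies to opaque systems.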

    Erosion of privacy. Many companies collect large quantities of personal data from consumers when they register for or use products or services. That data can be used to train AI-based systems for purposes such as targeted advertising and promotions and personalization. Ethical issues arise when that information is used for a different purpose—say, to train a model for making employment offers—without users’ knowledge or consent. A recent study highlighted that 60 percent of customers are concerned about AI-based technology compromising their personal information.14 To build customer trust, companies need to be transparent about how collected information is being used, create clearer mechanisms for consent, and better protect individual privacy.

    Poor accountability. With AI technologies increasingly automating decision-making for a wide range of critical applications, such as autonomous driving, disease diagnosis, and wealth management, the question arises of who should bear responsibility for any harm these systems cause. For instance, if a self-driving car fails to stop for a pedestrian and hits the individual, who should be held responsible: the car manufacturer, the passenger, or the owner? Existing accountability mechanisms for IT systems fail to adequately address such scenarios. Businesses, governments, and the public need to work toward establishing proper accountability structures.

    Workforce displacement and transitions. Companies are already using AI to automate tasks, with some aiming to take advantage of automation to reduce their workforces. In the 2018 Deloitte executive survey, 36 percent of respondents saw job cuts from AI-driven automation rising to the level of an ethical risk.15 Even jobs that are not eliminated may be impacted in some way by AI. Employers should find ways to use AI to increase opportunities for employees while mitigating negative impacts.

    The marketplace is taking action on AI ethics

    The increasing adoption of AI technologies, and growing awareness of various ethical risks associated with them, calls for urgency in designing approaches and mechanisms to deal with those risks. Governments, technology vendors, corporates, academic institutions, and others have already started laying the foundation for ethical AI use.

    Tech vendors at the forefront

    Many of the technology vendors creating AI tools and platforms are also at the forefront of ethical AI development efforts. Major technology companies including Google and IBM have developed ethical guidelines to govern the use of AI internally as well as guide other enterprises.16 For instance, while releasing its ethical guidelines, Google pledged to not develop AI specifically for weaponry, or for surveillance tools that would violate “internationally accepted norms.”17 Additionally, many technology vendors have launched or open-sourced tools to address ethical issues such as bias and lack of transparency in AI development and deployment. Examples include Facebook’s Fairness Flow, IBM’s AI Fairness 360 and AI OpenScale environment, and Google’s What-If Tool.18

    Governments and regulators are already very active

    Governments and regulators have already begun to play a crucial role in establishing policies and guidelines to tackle AI-related ethical issues. For instance, the European Union’s General Data Protection Regulation (GDPR) requires organizations to be able to explain decisions made by their algorithms.19 The EU is just one example: a growing list of national governments—including the United States, the United Kingdom, Canada, China, Singapore, France, and New Zealand—have released AI strategies, road maps, or plans focusing on developing ethical standards, policies, regulations, or frameworks.20 Other notable government initiatives include setting up AI ethics councils or task forces, and collaborating with other national governments, corporates, and other organizations.21 Though most of these efforts are still in their initial phases and do not impose binding requirements on companies (with GDPR a prominent exception), they signal growing urgency around AI’s ethical issues.
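
    Explainability requirements of the GDPR variety are easier to picture with a concrete example. The sketch below produces "reason codes" for a hypothetical linear credit-scoring model; the features, weights, and threshold are invented for illustration and are not drawn from GDPR or from any real scoring system.

```python
# Hypothetical linear scorer: the weights and threshold are made up for illustration.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "late_payments": -0.8, "account_age": 0.3}
THRESHOLD = 0.0

def score(applicant):
    """Weighted sum over (pre-normalized) applicant features."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant, top_n=2):
    """Return the decision plus 'reason codes': the features whose
    contributions pushed the score down the most."""
    contrib = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    approved = score(applicant) >= THRESHOLD
    reasons = sorted((f for f, c in contrib.items() if c < 0),
                     key=lambda f: contrib[f])[:top_n]
    return approved, reasons

applicant = {"income": 1.0, "debt_ratio": 0.9, "late_payments": 0.5, "account_age": 0.2}
approved, reasons = explain(applicant)
print(approved, reasons)  # False ['debt_ratio', 'late_payments']
```

    For a linear model, per-feature contributions are exact; for black-box models, approximation techniques are needed to produce comparable explanations, which is part of why the regulation is challenging.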

    Academia is driving research and education

    Universities and research institutions are playing an important role as well. Not only do they educate those who design and develop AI-based solutions—they are also researching ethical questions and auditing algorithms for the public good. A number of universities, including Carnegie Mellon and MIT, have launched courses dealing specifically with AI and ethics.22 MIT also created a platform called Moral Machine23 to crowdsource data and effectively train self-driving cars to respond to a variety of morally fraught scenarios. Indeed, ethics was a central theme at the recent launch of MIT’s new Schwarzman College of Computing.24 Moreover, academics are getting seats on AI governance teams at many technology companies and other enterprises as external advisers to help guide the responsible development of AI applications.25

    Nonprofits and corporates are engaging on AI ethics

    Consortia and think tanks are bringing together technology companies, governments, nonprofit organizations, and academia to collaborate on a complex and evolving set of AI-related ethical issues, leverage each other’s expertise and capabilities, and simultaneously build the AI ecosystem. One such consortium is the Partnership on AI, which counts 80-plus partner organizations.26 Companies across sectors are working to adopt ethical AI practices such as establishing ethics boards and retraining employees, and professional services firms are guiding clients on these issues.27

    How companies can make AI ethics a priority

    Technological progress tends to outpace regulatory change, and this is certainly true in the field of AI. But organizations may not want to wait for AI-related regulation to catch up. To protect their stakeholders and their reputations, and to fulfill their ethical commitments, organizations can do many things now as they design, build, and deploy AI-powered systems.28

    Enlist the board, engage stakeholders

    Since any AI-related ethical issue may carry broad and long-term risks—reputational, financial, and strategic—it is prudent to engage the board to address AI risks. Ideally, the task should fall to a technology or data committee of the board or, if no such committee exists, the entire board.

    Designing ethics into AI starts with determining what matters to stakeholders such as customers, employees, regulators, and the general public. Companies should consider setting up a dedicated AI governance and advisory committee, including cross-functional leaders and external advisers, that would engage with stakeholders and multi-stakeholder working groups, and establish and oversee governance of AI-enabled solutions, including their design, development, deployment, and use. With regulators specifically, organizations need to stay engaged, not only to track evolving regulations but to shape them.

    Leverage technology and process to avoid bias and other risks

    AI developers need to be trained to test for and remediate systems that unintentionally encode bias and treat users or other affected parties unfairly. Researchers and companies are introducing tools and techniques that can help. These include analytics tools that can automatically detect how data variables may be correlated with sensitive variables such as age, sex, or race; tests that flag algorithms whose decisions are unfair to certain populations; and methods for auditing and explaining how machine learning algorithms generate their outputs. Companies will need to integrate new technologies, control structures, and processes to manage these risks.29 Organizations should stay informed of developments in this area and ensure they have processes in place to use these tools appropriately.
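
    The first kind of tool mentioned above, detecting when an ordinary data variable correlates with a sensitive one and can therefore act as a proxy for it, can be sketched in a few lines. The dataset, the "postcode_band" proxy, and the 0.7 cutoff are all invented for illustration.

```python
# Illustrative proxy-variable check: flag features that correlate strongly
# with a sensitive attribute. Data and threshold are made up for this sketch.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient, hand-rolled to stay dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def proxy_features(data, sensitive, threshold=0.7):
    """Flag features whose |correlation| with the sensitive attribute exceeds threshold."""
    return [(name, round(pearson(values, sensitive), 2))
            for name, values in data.items()
            if abs(pearson(values, sensitive)) > threshold]

# Toy records: postcode band closely tracks the sensitive attribute; experience does not.
sensitive_attr = [0, 0, 0, 1, 1, 1, 0, 1]
data = {
    "postcode_band":    [1, 1, 2, 8, 9, 9, 2, 8],
    "years_experience": [3, 7, 2, 5, 1, 6, 4, 2],
}
print(proxy_features(data, sensitive_attr))  # [('postcode_band', 0.99)]
```

    Production tools such as those cited in this article go well beyond linear correlation (nonlinear dependence, intersectional groups), but the screening idea is the same: a model trained without a sensitive attribute can still learn it through a proxy.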

    Build trust through transparency

    In an era of opaque, automated systems and deepfakes (realistic but synthetic images, videos, speech, or text generated by AI systems),30 companies can help build trust with stakeholders by being transparent about their use of AI. For instance, rather than masquerade as humans, intelligent agents or chatbots should identify themselves as such. Companies should disclose the use of automated decision systems that affect customers. Where possible, companies should clearly explain what data they collect, what they do with it, and how that usage affects customers.

    Help alleviate employee anxiety

    Whether AI will eliminate jobs or transform them, it is likely that the technology will eventually affect many, if not most, jobs in some way. An ethical response for companies is to begin advising employees on how AI may affect their jobs in the future. This could include retraining workers whose tasks are expected to be automated or whose work will likely entail using automated systems—or giving them time to seek new employment. Companies in technology, financial services, energy and resources, and telecom have already started preparing their employees to stay relevant in an AI-driven future.31

    Balance the benefits and risks of AI

    New technology always brings benefits and risks, and AI is no different. Wise leaders seek to balance risks and benefits to achieve their goals and fulfill their responsibilities to their diverse stakeholders. Even as they seek to take advantage of AI technology to improve business performance, companies should consider the ethical questions raised by this technology and begin to develop their capacity to leverage it effectively and in an ethically responsible way.

    Acknowledgments

    The authors would like to thank Yang Chu and Ishita Kishore of Deloitte & Touche LLP, Sachin Maheshwari of Deloitte FAS India Pvt. Ltd., and Jonathan Camhi of Deloitte LLP.

    Cover image by: Molly Woodworth

    Endnotes
      1. Deloitte’s Quid analysis.

      2. Jeff Loucks, Tom Davenport, and David Schatsky, State of AI in the enterprise, 2nd edition, Deloitte Insights, October 22, 2018.

      3. Tim Dutton, “An overview of national AI strategies,” Medium, June 29, 2018.

      4. Zoey Chong, “New AI ethics council in Singapore will give smart advice,” CNet, June 5, 2018; Sydney J. Freedberg, “Joint Artificial Intelligence Center created under DoD CIO,” Breaking Defense, June 29, 2018; Zoë Bernard, “The first bill to examine ‘algorithmic bias’ in government agencies has just passed in New York City,” Business Insider, December 19, 2017; Tim Sandle, “France and Canada collaborate on ethical AI,” Digital Journal, June 10, 2018; Amanda Russo, “United Kingdom partners with World Economic Forum to develop first artificial intelligence procurement policy,” World Economic Forum, September 20, 2018; CIFAR, “AI & society,” 2017.

      5. Alex Hern, “DeepMind announces ethics group to focus on problems of AI,” Guardian, October 4, 2017; Sundar Pichai, “AI at Google: Our principles,” Google, June 7, 2018; Kyle Wiggers, “Google’s What-If tool for TensorBoard helps users visualize AI bias,” VentureBeat, September 11, 2018; Adam Cutler, Milena Pribić, and Lawrence Humphrey, “Everyday ethics for artificial intelligence,” IBM, September 2018; Stephan Shankland, “Facebook starts building AI with an ethical compass,” CNet, May 2, 2018.

      6. Peter High, “Bank of America and Harvard Kennedy School announce the Council on the Responsible Use of AI,” Forbes, April 23, 2018; Finextra, “Singapore’s MAS preps guidelines for ethical use of AI and data analytics,” April 4, 2018; Big Data Institute, “Oxford secures £17.5 million to lead national programmes in AI to improve healthcare,” November 6, 2018; Partnership on AI, “Zalando,” accessed March 28, 2019; Partnership on AI, “BBC,” accessed March 28, 2019.

      7. Thomas H. Davenport, Jeff Loucks, and David Schatsky, The 2017 Deloitte state of cognitive survey, Deloitte, November 2017.

      8. Abigail Beall, “It’s time to address artificial intelligence’s ethical problems,” Wired, August 24, 2018.

      9. arXiv, “Do we need Asimov’s laws?” MIT Technology Review, May 16, 2014.

      10. Christoph Schulze, “Ethics and AI,” University of Maryland, 2012.

      11. Stefan Helmreich and Heather Paxson, “Computing is deeply human,” MIT School of Humanities, Arts, and Social Sciences, February 18, 2019.

      12. Jeffrey Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, October 10, 2018; Kaveh Waddell, “How algorithms can bring down minorities’ credit scores,” Atlantic, December 2, 2016; Ed Yong, “A popular algorithm is no better at predicting crimes than random people,” Atlantic, January 17, 2018.

      13. David Schatsky and Rameeta Chauhan, Machine learning and the five vectors of progress, Deloitte Insights, November 29, 2017.

      14. Salesforce Research, Trends in customer trust, accessed March 27, 2019.

      15. Loucks, Davenport, and Schatsky, State of AI in the enterprise, 2nd edition.

      16. Pichai, “AI at Google: Our principles;” Cutler, Pribić, and Humphrey, “Everyday ethics for artificial intelligence.”

      17. Dave Gershgorn, “Google’s new ethics rules forbid using its AI for weapons,” Quartz, June 8, 2018.

      18. Kush R. Varshney, “Introducing AI Fairness 360,” IBM Research blog, September 19, 2018; Natasha Lomas, “IBM launches cloud tool to detect AI bias and explain automated decisions,” TechCrunch, September 19, 2018; Wiggers, “Google’s What-If tool for TensorBoard helps users visualize AI bias.”

      19. David Meyer, “AI has a big privacy problem and Europe’s new data protection law is about to expose it,” Fortune, May 25, 2018.

      20. Dutton, “An overview of national AI strategies.”

      21. Chong, “New AI ethics council in Singapore will give smart advice;” Freedberg, “Joint Artificial Intelligence Center created under DoD CIO;” Bernard, “The first bill to examine ‘algorithmic bias’ in government agencies has just passed in New York City;” Sandle, “France and Canada collaborate on ethical AI;” Russo, “United Kingdom partners with World Economic Forum to develop first artificial intelligence procurement policy.”

      22. Jeremy Hsu, “College AI courses get an ethics makeover,” Discover, April 26, 2018; Natalie Saltiel, “The ethics and governance of artificial intelligence,” MIT Media Lab course, November 16, 2017.

      23. MIT, “Moral Machine,” accessed March 27, 2019.

      24. MIT School of Humanities, Arts, and Social Sciences, “Ethics, computing, and AI,” February 18, 2019.

      25. Alex Hern, “DeepMind announces ethics group to focus on problems of AI,” Guardian, October 4, 2017; Aurel Dragan, “SAP launches the first Guide to Artificial Intelligence and an external commission of AI ethics consultants,” Business Review, September 20, 2018; PR Newswire, “Axon launches first artificial intelligence ethics board for public safety; promotes responsible development of AI technologies,” April 26, 2018.

      26. Partnership on AI, “Frequently asked questions,” accessed March 27, 2019.

      27. PR Newswire, “Axon launches first artificial intelligence ethics board for public safety;” Rachel Louise Ensign, “Bank of America’s workers prepare for the bots,” Wall Street Journal, June 19, 2018; Thomas H. Davenport and Vivek Katyal, “Every leader’s guide to the ethics of AI,” MIT Sloan Management Review, December 6, 2018; Ali Hashmi, AI ethics: The next big thing in government, Deloitte, February 2019.

      28. For a fuller treatment of some of these topics, see Davenport and Katyal, “Every leader’s guide to the ethics of AI.”

      29. For more on managing algorithmic risk, see Dilip Krishna, Nancy Albinson, and Yang Chu, Managing algorithmic risks, Deloitte, 2017.

      30. John Villasenor, “Artificial intelligence, deepfakes, and the uncertain future of truth,” Brookings, February 14, 2019.

      31. Mark MacCarthy, “Planning for artificial intelligence’s transformation of 21st Century jobs,” CIO, March 6, 2018; Ensign, “Bank of America’s workers prepare for the bots;” Genpact, “New ways of working with artificial intelligence,” accessed March 27, 2019.


    Topics in this article

    Technology Management, Emerging technologies, Artificial intelligence (AI), Risk management, Signals for Strategists

    David Schatsky

    Managing Director | Deloitte LLP

    David analyzes emerging technology and business trends for Deloitte’s leaders and clients. His recent published works include Signals for Strategists: Sensing Emerging Trends in Business and Technology (Rosetta Books 2015), “Demystifying artificial intelligence: What business leaders need to know about cognitive technologies,” and “Cognitive technologies: The real opportunities for business” (Deloitte Insights 2014-15). Before joining Deloitte, David led two research and advisory firms.

    • dschatsky@deloitte.com
    Vivek (Vic) Katyal

    Principal | Deloitte & Touche LLP

    Vic is Deloitte Risk and Financial Advisory’s chief operating officer (COO). He oversees business operations and finance and focuses on driving more consistency, accuracy, timeliness, efficiency, and accountability into financial and operational management processes, including planning, forecasting, deployment, pricing, delivery costs, asset development, billing/collections, and expense management. As COO, Vic is an integral part of Risk and Financial Advisory’s Executive Committee and works closely with the chief executive officer to prioritize the goals of the practice and align them with its vision and strategy. He is Risk and Financial Advisory’s representative on the overall Firm’s Operating Committee.

    Vic has held a variety of previous leadership positions within Risk and Financial Advisory. Prior to his COO role, he served as the Global Risk Analytics, Global Data Risk, and US Risk and Financial Advisory Analytics leader, supporting cross-function and cross-border collaboration. He has also supported innovation and growth in the market as a Solution leader for the Cyber Data Platform within Cyber Risk, as well as the Enablement and Operate lead in the Regulatory and Operations Risk Market Offering.

    In addition to his leadership roles, Vic primarily serves cyber and data risk domains at his clients and manages professionals who support those areas, with a particular focus on the Banking & Securities industry at the top five US financial services institutions. He has led a number of artificial intelligence (AI), analytics, and data risk management initiatives at his clients, helping them design and implement data programs and technology-enabled solutions to control, monitor, manage, and glean insights from data assets.

    • vkatyal@deloitte.com
    • +1 612 397 4772
Satish Iyengar

    Satish Iyengar is a senior manager at Deloitte & Touche LLP, based in Minneapolis. He is an advisory professional with more than 16 years of management consulting experience in the areas of data and AI risk management. As part of the cyber data risk team at Deloitte Risk & Financial Advisory, Iyengar works on the assessment, design, development, and implementation of data, analytics, and AI strategies and solutions. Connect with him on LinkedIn at www.linkedin.com/in/satish-iyengar-170210/ and on Twitter @iyengar33.

    • siyengar@deloitte.com
Rameeta Chauhan

    Assistant Manager

Rameeta Chauhan is an assistant manager at Deloitte Services India Pvt. Ltd. She tracks and analyzes emerging technology and business trends, with a primary focus on cognitive technologies, for Deloitte’s leaders and clients. Before joining Deloitte, she worked on technology and business research teams at several companies.

    • ramchauhan@deloitte.com
    • +1 678 299 9739

    © 2023. See Terms of Use for more information.