
A business-critical goal: Ethical tech and trust

Insights In Depth: Tech Trends 2020

03 June 2020

We depend on technology, but do we trust it? Tanya talks with Scott Buchholz, David Danks, and Catherine Bannister about why embracing ethics in tech is so important for organizations.

Tanya Ott: In 1888, a 30-year-old physician and drug store owner living in Chicago came up with a way to harness alkaloids—the active part of medicinal plants … think morphine, strychnine, codeine—into tiny granules. It made treating pain in patients safer and more effective. One hundred thirty-two years later, that small company—Abbott—is much bigger and doing things its founder surely never imagined.

Scott Buchholz: These days they make things such as pacemakers and a million other things.

Tanya: Things that go inside people’s bodies, but …

Scott: A lot of the pacemakers these days actually have connectivity so that they can be monitored and managed outside the body. That’s true for glucose detection devices and some of the devices that help diabetics and a variety of other things.

Tanya: That means patients really need to have a lot of trust that companies are looking out for them. So the people designing the products have to think through all the ways personal medical information—and the devices themselves—could be compromised.

I’m Tanya Ott and today we’re tackling ethical technology on this Tech Trends edition of Insights In Depth.

We’ve got several guides for today’s conversation and each of them looks at the issue through a slightly different lens.

Scott: Hello, everybody. This is Scott Buchholz. I have the privilege of being the chief technology officer for our government practice within Deloitte Consulting.

Tanya: He leads the annual technology trends research at Deloitte.

Scott: We try to look 18 to 24 months over the horizon for what’s going on in technology.

Tanya: Scott and his team talk to a lot of technology and business leaders across the C-suite, including the CIO and CMO of Abbott Laboratories.

Scott: One of the interesting things when we were talking with their CIO about this is he said one of the things that they had to do was actually go and help their researchers understand that they needed to protect, for example, the pacemakers against being hacked or taken over by somebody with nefarious purposes in mind. Connectivity is wonderful; it enables us to do things that we never could before. The flip side of that is it enables other people to do things that we probably don’t want them doing.

Tanya: Our other guides are Catherine Bannister, who leads development and performance for Deloitte’s U.S. firms …

Catherine Bannister: As part of that, I lead an effort at Deloitte that we call Tech Savvy, which is an effort to raise the fluency, or the ability to be conversant, about all things technology for all of our people.

Tanya: And David Danks.

David Danks: I’m the Thurstone professor of philosophy and psychology and head of the Department of Philosophy at Carnegie Mellon University in Pittsburgh, Pennsylvania. I’ve been spending a lot of time over the last decade working on the challenge of how to have technology support people’s values and interests rather than working against them. As a result, [I] have ended up thinking a lot about how companies can participate in this effort as well.

Tanya: David, that example of hacking into a medical device is a pretty extreme one. But there are a lot of reasons that people might be a little bit uncomfortable with that kind of technology.

David: Absolutely. When we think about the ways in which technology, especially in the health care space, has transformed our lives, we often focus on the ways in which we might be gaining more control over our lives. We’re able to do things that we weren’t able to do before, but at the risk of ceding some of that control to companies where we don’t necessarily know what they’re going to do with the data or what kind of protections they put into place. For a lot of people, it’s exactly this kind of uncertainty and lack of knowledge that creates that problem and can create a real lack of trust with companies. And as we all know, when there’s a lack of trust, there will sometimes be a lack of adoption or lack of use. And it’s a real barrier that has to be tackled head on by companies. And, of course, by consumers as well in terms of educating themselves about the technologies they’re considering using.

Tanya: Catherine, you’re particularly attuned to this idea of what we’re not considering when it comes to ethical implications of technology.

Catherine: When you talk about ethics—and David can keep me honest here—the first exercise that you need to undergo is this ability to recognize ethical dilemmas. One of the things that we try to start with when we’re trying to embed an ethical mindset into technologists—I want to be clear, I’m a technologist by trade and technology is a wonderful thing—is that when you start to think about the ethical implications, you start to sometimes [see] the gloom and doom and the scary aspects, but all of us do technology because we see the benefit. We see the opportunity that these technologies can bring us. What we want to do is to just broaden the scope of how you’re thinking about the technology that you’re designing or you’re implementing or you’re using, to think about what are the potential unintended consequences of the design or of the usage or of the data that you’re collecting and who has access to it and things like that. So it really is about practicing, recognizing the potential dilemmas that this technology presents, and then also broadening the way that you interrogate the decision to make sure that you’re coming at it from different angles and different aspects.

Tanya: David, when we talk about trust, first of all, what does that mean and why does it matter?

David: What a great question. Trust is one of those words that gets thrown around a lot. When we look, for example, in the academic literature, we see that there are at least a dozen different academic disciplines that have studied the nature of trust. And we get something on the order of 20 to 25 different definitions from those dozen-plus disciplines.

What we really need to do is step back and understand trust as a willingness to make yourself vulnerable in some way because of what you expect or believe or know about the trustees. That I as a trustor am willing to expose myself in some way. I make myself vulnerable, whether it’s by placing a technology into my body or using some technology in an online space or closer to home, trusting somebody to watch over my child when I’m not around. We make ourselves vulnerable because of what we know about the trustee, the individual in whom we’ve placed our trust, in whom we really have placed our values and things that matter to us.

What’s really important about thinking through trust in this way is that we’ve very quickly come to see that trust is not about reliability and it’s not about convenience or predictability. What it is, is it’s about having an understanding that the trustee, the person or the technology in whom you’ve placed these important values, is going to look after your values in the ways that you would if only you were around to take care of it. I can’t necessarily predict exactly how, say, my wife is going to respond if our daughter gets upset about something. But nonetheless, I completely trust her that she has the same values as I do, and she’ll respond in a sensible way. And she's heard this joke many times, but I can predict my car much better than my wife, but I trust my wife much more than my car.

What that gets to is the importance of thinking through trust. It’s not just about being able to predict what’s going to happen in the next five minutes or the next time I use this technology. It’s about what is the company behind the technology? What is the technology? What are the other users in this space? What are they going to do when things go wrong? Because things always go wrong.

Trust is not just about making a decision to use something now. It’s about believing that things will be made right when the inevitable failures or accidents happen. We can’t get everything right all the time. So trusting another person or a company or a technology really has a crucial component of an expectation and belief, a deep-seated belief, that problems will be fixed and made right. That goes directly to challenges for companies.

Trust is not something that resides in a technology. It’s not just ones and zeros. It’s about how the users, the consumers, the people with whom the company interacts really think about everything that’s involved. The whole sort of corporate technological system, not just the particular thing that went in my body.

Tanya: One of the big challenges with technology and trust, particularly from the consumer perspective, is that consumers oftentimes don’t actually understand how the technology works, so it’s sort of a bigger trust fall that they do.

Catherine: That’s right. I also think that’s a plug for why everybody needs to be more tech savvy and why we’re trying to elevate people’s tech savviness at Deloitte. But going back to something that David just said: trust in the organization is knowing that they will do the right thing. A step further back from that, in order to trust that the organization is going to do the right thing, you also trust that they’ve thought about what the bad things that could happen are. You trust that they’re having the conversations, that they’re recognizing the potential harms that the technologies could introduce to some or larger swaths of the user base, and you trust that they have the values and the policies and the rigor to make decisions that allow them to better predict what some of the unintended consequences might be. Then you trust that they will get in front of those risks, mitigate them, and take action.

Tanya: For a long time, trust has been seen largely as a compliance issue or a public-relations issue. But we’re starting to see an evolution in the way that companies are approaching this idea of trust. What do you see happening?

Catherine: The thing about ethical technology and trust in technology is that—and this is maybe just my opinion—if there is a law or regulation written, then everybody kind of knows what is on each side of that line. Right? You know what you can design for and how you have to protect the data and all of that. But until everybody is collectively savvy enough about these technologies and these disruptors to even regulate and write laws about them, you’re living in this gray area where you don’t have the strict guidelines to tell you what the answer is. The way that we approach ethical tech is that it’s a series of conversations, it’s a series of decisions that don’t have an obvious right answer, or there could be multiple right answers. But the journey is part of the decision-making process, and you have to make sure that you are having those debates separately from, but aligned with, the compliance and legal work that serves as the baseline. When you don’t have a strict rule to follow, you still have to make decisions, and you still have to feel like you came to those decisions in an ethical way and in a considered, thoughtful way.

David: One of the things that we can see happening, to build off of what Catherine is pointing out, is that there is a growing understanding and recognition among a lot of tech companies, whether because they have to respond to internal pressures from their employees or because they recognize the literal economic value of a price premium that can be charged for trusted products, that producing ethical, responsible, trusted technology matters. This is no longer a case of “oh, it would be nice if we could produce technology that was ethical or trusted, but it’s not really required.” Companies have come to understand, at least many of them, that it really is critical both internally and externally to the company that their products have the kinds of features that people want, that they are supported in the way that people need, and that trust itself has economic value. It isn’t that we have to decide between something that’s ethical or effective or profit making. In fact, all three of those can go very cleanly together.

Scott: What’s really interesting about what Catherine and David were just saying is that when we started on our chapter research for Ethical Technology and Trust, we originally had a hypothesis that some companies were using trust as a competitive differentiator and an area of competitive advantage. What we came to realize is that brands, and the concept of brand, are actually relatively synonymous with trust. If you think back to the history of brands over a couple hundred years, they were actually invented because you could no longer buy everything from the local baker or the local cobbler, and the brand was actually a substitute for the thing that you could trust, the person who was in your neighborhood. If you fast-forward to today, it feels sometimes like some brands, some organizations, are relearning the lessons of a couple hundred years ago about the importance of trust in actually managing brands and being market-facing.

Tanya: Scott, I want to follow up on that because one of the things that you’re really looking at is how do we build trust in organizations? I’d love to have you talk about the Abbott example that we mentioned at the top of the show and their 360-degree view of trust.

Scott: When we talk about a 360-degree view of trust, what we mean is not just technology. It also means making sure you’re thinking about people and processes. From a people perspective, it’s recognizing that every individual in the organization might be making an individual decision that impacts the overall organization’s trust and ethics and other things, so making sure that everybody is trained and informed and sensitized to those dimensions and what they mean for their decisions and so forth. From a process perspective, it means taking a look at how we’re collecting information, how we’re making sure that we are actually honoring the processes, honoring the policies, honoring the values, and making sure that that’s embedded in the things that everybody’s doing. And last, but certainly not least, it’s also looking at the technology and making sure that the technology aligns with and supports the people and processes to make them better and more powerful, rather than working against them or making things harder.

When we talked with Abbott’s chief information officer and chief marketing officer for the 2020 report, they shared a few examples of their approach with us. For example, they train everybody from the board and the board subcommittees all the way down to individual employees, and they try to make sure that everybody is aware of the implications of things like data privacy and ethical choices and security. They hold focus groups with their users to try to make sure that people understand within the organization what trust and security and transparency and other things mean for their actual end customers. Then from a technology perspective, they’ve certainly invested in cybersecurity capabilities and embedded controls in their products to try to make sure that things that go into people’s bodies and the things that they produce are as safe and secure as they can make them.

Tanya: They do something called penetration testing, which is what?

Scott: They have ethical hackers who try to break into their devices and their systems. For example, before they put a pacemaker in people's bodies, they will have ethical hackers try to break in to make sure that the device is secure, so that less ethical individuals out in the world can’t do that either.

David: If I could just build on what Scott was saying, one of the things that's really key there is the translation from values and principles all the way down to everyday practices. This is one of the challenges that a lot of companies are coming to recognize when they think about their values and principles and especially how they connect to trust. Trust is not just about declaring that you have some value or that you have some high-level principle that is going to guide the operations of the company or the development of new technology. It comes down to the on-the-ground, everyday practices of how coders do their job a little bit differently because they have trust as a goal or how hardware manufacturers do their specification in a somewhat more sophisticated way because they want to ensure that values of privacy and security are maintained in the product. This is one of the big challenges confronting a large number of technology companies right now. This challenge of translating principles to practices is not easy, but companies need to tackle the problem head on.

Catherine: You were referencing technology companies, but I actually think that technology has become ubiquitous. We at Deloitte have this common statement that every company is now a technology company. So organizations need to start thinking about how they’re using technology, even if they’re not a technology producer. When I think back to how we at Deloitte started our tech savvy journey, we’re not, per se, a technology producer. We do have a technology practice that advises on and implements technology solutions. But we have a vast variety of other practices and other services. So several years ago, we kind of looked around and said, if every company is a tech company, then that means that we’re a tech company. And that means that people who don’t do technology also need to understand how to talk about and use and advise on and adapt to these disruptors, these technology disruptors. We had to meet the nontechnologists where they work and bring them up to speed. Then you add in the idea that, OK, now that you actually understand what digital is and what the cloud is and what analytics and AI and some of these things are and how they impact you in your day-to-day life and in the life of your clients, you also need to start adding in this layer of what the impacts of these disruptors are. Again, the reason that they are disruptive is, in general, for benefit: to make your work easier, to make your life easier, to advance the research and the ability to have success in all aspects of your life. But if you really are going to be what we call tech savvy, you need to be able to think about the potential impacts, good and bad, of these technologies and disruptors, and then know how to mitigate them and how to react to them.

Tanya: Let’s talk a little bit about building trust into the technology itself. Of course, probably the very first step that you have to look at, Scott, is cybersecurity. Is it safe from hackers or other people that have intentions that are not the most honorable?

Scott: We’ve reached the point now where cybersecurity is clearly a problem everywhere for everyone. And a lot of it, when you think about it, really starts with knowing what data you have, knowing what data you’re collecting, making sure you’re not collecting and storing more data than you actually need. And then you have to figure out how to secure that information and so forth. How do you protect the application itself? A lot of those things are better understood now than ever before. There are a lot of approaches for doing it, which range from things on the process side like DevSecOps and automated testing and automated tools to ethical hacking and other options like we talked about before. A lot of it, though, for better or for worse, is really starting with a good understanding of the environment in which the technology is going to operate and then going back to some foundational things around cybersecurity and good cyber hygiene that are happily well known and understood but unfortunately not always practiced as widely and as assiduously as would be helpful.
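To make Scott’s point about not collecting or storing more data than you need a little more concrete, here is a minimal, hypothetical sketch; the field names and the device-monitoring scenario are invented for illustration and are not drawn from Abbott or Deloitte. The idea is simply that an application can declare up front which fields it actually needs and drop everything else before a record is ever stored.

```python
# Hypothetical data-minimization step: keep only the fields a feature
# actually needs, and discard everything else before storage.
from typing import Any, Dict

# Fields this imaginary device-monitoring feature genuinely requires.
ALLOWED_FIELDS = {"device_id", "battery_level", "last_sync"}

def minimize(record: Dict[str, Any]) -> Dict[str, Any]:
    """Return a copy of the record containing only allowlisted fields."""
    return {key: value for key, value in record.items() if key in ALLOWED_FIELDS}

incoming = {
    "device_id": "PM-1024",
    "battery_level": 87,
    "last_sync": "2020-06-03T10:15:00Z",
    "patient_name": "Jane Doe",      # not needed for this feature, so never stored
    "home_address": "123 Main St",   # not needed for this feature, so never stored
}

print(minimize(incoming))
# {'device_id': 'PM-1024', 'battery_level': 87, 'last_sync': '2020-06-03T10:15:00Z'}
```

The point of the sketch is only that “don’t store more than you need” can be enforced as an explicit step in code rather than left as a policy statement.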

Tanya: Why that disconnect between the two? We know we should be doing this thing, but we’re not doing it.

Scott: Well, I don't know that I floss my teeth every night either, and I should definitely be doing that. Unfortunately, there are things that are good for us that we know we need to do that we don’t always do. It’s not necessarily because we are bad people. It can be because we get busy, because there are conflicting priorities, and because it is something that has downstream impacts that are unlikely, however important. It’s all sorts of different things, but really, it’s because we’re all human beings dealing with imperfect information and lots of pressures.

Tanya: David, you’ve got this concept that you call an ethics designer. What is that?

David: That’s a good question. Part of it is building, by analogy, on many of the successes that Scott was just referring to in the fields of security and privacy and also what we see when we look at the fields of user interface and user experience. What I mean is if we look back 20 years, technology development didn’t necessarily have a widespread focus on privacy, security, user interfaces. Certainly, many technology companies, for example, would build their technology without thinking very much about those issues. There wouldn't be somebody on the team who would be tasked with thinking about how to maintain the user’s security and privacy throughout the interaction with an app, for example. And what we've seen over the last, say, 20 years is the normalization of these roles within development and deployment teams. Part of designing a new product now is thinking about the user experience. Part of designing a product is to think about the security and privacy, whether through penetration testing or other sorts of techniques. So we’ve seen that these are no longer strange aspects of the process that people have to argue for, but instead are just part of the everyday ways in which we design and develop new products.

The thought is that we need to do something similar when it comes to ethical considerations. There are an enormous number of ethical decisions that are being made at every step of the design and development of a new product. Just the decision to tackle a particular challenge is itself already an ethical choice, because when we make that kind of a decision, what we’re essentially saying is that this is something that’s important. This is a problem that needs to be solved and those other problems are not quite as important. There are so many problems in the world that a company might try to solve with its products, and they have to pick and choose. And those are fundamentally ethical choices.

Now, as Catherine noted, a lot of people get scared by the word ethics. I would really want to push back on that, because we’re all ethicists. We’re making ethical choices every moment of every day in our lives, and inevitably that is just us trying to figure out how to live according to our values, whether the values we have in our role as a company or the values we have in our roles as parents or friends. At the end of the day, we need to recognize all of these ethical choices that are happening in the design and development processes, and we need to make them explicit within the teams that are developing the products.

So that’s the idea of having an ethical designer. It’s not that we want to have designers who are ethical. Of course we want that. But the idea is to have people whose role in a product team is explicitly to think about the ethical and societal impacts of what’s being built, to bring to the fore all of these value tradeoffs that are just the inevitable part of any design process, and to really embrace them rather than running from them or having them be decisions that get thrown out at the last second: “Well, I guess we have to do something, so we’ll do it this way.” Really to think from the very beginning and in a deliberate manner about the kinds of ethical and societal impacts that a product can have. And through that, the hope is that we can start to normalize these kinds of decisions.

Ethical technology, ethical products, are not ethical because of something that happens at the very end. It isn’t the case that we build the product we wanted and then we slap a few features on it that make it ethical. Rather, ethical design is a process itself. It’s a feature of the process. So what we really need to do is work through how to make that part of the normal activity that occurs.

Tanya: David, you envision a future where ethics designers are going to be embedded in every project, as you say. I’m wondering what those people are like. What world do they come from? Are these technologists who also are deeply steeped in philosophy or what is that kind of person?

David: Much as I wish that they all had to be philosophers, because that would be a dedicated jobs program for our students, that’s not actually required. As I mentioned, we are all already ethicists. I know that we don’t think of ourselves that way. Most people don’t self-describe as an ethicist and yet we already are. So there is some training and education that is needed, but it is actually much less than I think people usually think, as long as it is combined with practices that work to realize the values of the company. This goes back to Scott's earlier point about the importance of having the everyday life of the developers at Abbott be shaped by the values of the company. When there is a culture within a company of this kind of value-centric thinking, ethics-centric decision-making, then it doesn’t take too much extra training for somebody to be able to highlight the ethical and societal impacts. This really is a pretty low barrier to entry for most companies if they are really committed to making it happen.

Catherine: And David, I would add to that that there’s an intersection that we have found, and you probably have found as well, between inclusive organizations and more ethical decisions. And with inclusive or, I guess, more diverse design teams and decision teams, you by default start to bring in different viewpoints. It’s not necessarily surface-level diversity or traditional diversity, but the diversity of background, diversity of thought, and it could be one way to have that ethical designer on the team. Cennydd Bowles in his book Future Ethics called it a designated dissenter, someone whose job it is to be the devil’s advocate and push back and ask these questions. Another way is to just look around and see, do we have a variety of backgrounds and input and thought in this decision-making that’s going to challenge us: who have we forgotten, or what user group have we left out, or what harm have we not thought about? You’re never going to get it 100 percent perfect, but the organizations that pay attention to both those things have better outcomes.

Tanya: Scott, as I’m listening to Catherine talk about the importance of diversity, it strikes me that large companies that operate globally face a lot of challenges that smaller companies don’t. One of them is the need to respect different cultural norms when it comes to technology, because a technology that may be perfectly appropriate in one culture may be viewed a completely different way in another. Can you talk to that a little bit?

Scott: Sure. I was talking with some national CIOs of a large retailer at one point, and we were having a discussion about the use of facial recognition and computer vision in retail stores. We were going back and forth, and there was a Canadian and an American and a number of Europeans. And they all had fairly similar points of view about the idea that there’s this tradeoff between convenience and privacy. And they weren’t quite sure where the boundaries were. If you think about that for a moment and project it out, what that means is that the organization, that global retailer, needs to make a difficult series of choices. They might decide that what’s really important to them is having a common standard around the world, which might be great for some of their markets and terrible for others. Conversely, they might decide that local choices are best, and then they run the risk of brand-image problems because they don’t behave the same way in different parts of the world. Therefore, the idea of who we stand for and what we stand for and what is permissible or not, while it sounds really simple, actually becomes a very complex challenge and series of tradeoffs that different organizations are going to have to tackle and deal with differently. It’s amplified by technology. It exists without technology, but it’s certainly amplified by it. And it’s going to be really interesting to see how people deal with these tradeoffs in the coming years.

Tanya: OK, David, it’s your shot to talk a little bit about whether ethics is contextual. If a company is operating ethically when it comes to technology, is that a one-shot deal or is it different with different cultures?

David: I tend to think about ethics in a relatively big tent way. Ethics is not just what we do when we think we’re acting ethically. Ethics is about asking and trying to answer different kinds of questions. The first is what ought we value? That can be what we value as a company. What we value as citizens of parts of the world. What ought we value in other roles that we play? Then given whatever it is that we value, the second question is, what should we do in order to realize those values? The challenge for companies becomes figuring out which of their values are actually contextual and which are not. Some values are contextual. If we think about having the value of being polite to other people, politeness manifests itself in different ways, in different cultures, and in different contexts. So we recognize that there’s a kind of context-sensitivity to the value. It’s not that the value comes and goes, but what that means is going to change across different contexts. But then there are other kinds of values, other things that might be important, for example, being a trusted friend, that are much less sensitive to the context.

Careful listeners of the podcast will have noticed that I said less sensitive to the context, not insensitive to the context, and that’s an important point. Almost all of our values depend in part on what’s happening around us. The ways in which we act are really subject to the context in which we find ourselves. That doesn’t mean that everything’s relative. It doesn’t mean that we are just creatures of peer pressure who do whatever everyone else around us is doing. We are sensitive to the world in which we find ourselves and all of the complexities of the world in which we find ourselves. And it’s right for companies to be sensitive in that way.

So what does that mean in practice? Well, what it means in practice is that companies should feel free, in a certain sense, to be contextual in their actions, to recognize that they might need to act differently in different contexts. So how do they avoid the risk that Scott was raising of being viewed as not having any principles at all? Well, they need to back up those actions with an explanation or a characterization or a description in terms of the core values that are guiding them. They need to make sure that their customers, their users, the people they interact with, understand why they acted in different ways, in different places, and that those actions all stem from the same core values and interests.

Tanya: Catherine, David, Scott—thank you so much for joining us today. It’s been a great conversation.

Catherine: It’s a great topic. Thanks for having us.

Scott: It’s been a lot of fun.

David: Thanks for letting me join in this conversation.

Tanya: That was Scott Buchholz, chief technology officer for Deloitte Consulting’s government practice; Catherine Bannister, who leads development and performance for Deloitte’s U.S. firms; and David Danks, head of the Department of Philosophy at Carnegie Mellon University. You also might have detected in the background the podcast debut of David’s 12-year-old Rhodesian ridgeback rescue dog Jessie, who is also working from home these days.

Stay tuned for more in-depth conversations with the people who are building the future right now in some of the most exciting developments at the intersection between technology and humanity. In this series on Insights In Depth, we’re taking a deep dive into each Tech Trend from our 2020 report. In other episodes, we’re talking about the macro forces molding tech, finance, and the future of IT; the rise of digital twins in manufacturing and beyond; human experience platforms; the expanding role of architecture; and the new technologies that are hovering just over the horizon.

You can also connect with us on Twitter at @DeloitteInsight. I’m on Twitter at @tanyaott1. I am Tanya Ott. Have a great day!

This podcast is produced by Deloitte. The views and opinions expressed by podcast speakers and guests are solely their own and do not reflect the opinions of Deloitte. This podcast provides general information only and is not intended to constitute advice or services of any kind. For additional information about Deloitte, go to Deloitte.com/about.