As expectations for artificial intelligence grow, organizations will have to bring human thinking into their products. To do so, the communities that are part of the solution must actively participate in the product design as well.
Business aspirations for artificial intelligence (AI) are at an all-time high. Organizations are working toward a near future where AI technologies drive cars, treat medical issues, and coordinate citywide public resources to improve safety and wellness. Simultaneously, there is a recognition that to achieve this state, designers of these AI solutions need to address some significant issues.1 From a data quality perspective, this entails gathering accurate, useful data. Strategically, it requires that solutions align with real business needs. And in terms of change management, designers need to build AI systems that fit intuitively into user workflows.
As our collective ambitions for AI solutions increase, so do the complexities of the above-mentioned issues. Executives are starting to acknowledge just how difficult it is to build transformative AI systems. For instance, in Deloitte’s 2018 State of AI in the Enterprise study, 56 percent of leaders indicated that AI will transform their business within three years—down from 76 percent a year earlier.2 And in terms of implementations, another study revealed that leaders often intend to apply AI-infused technologies to disrupt their industries.3 But increased complexity leads many to settle for incremental changes to legacy processes and systems rather than building new, innovative solutions.
We see this taking shape in, for instance, health care. There are a number of examples that highlight how AI can help doctors more accurately diagnose symptoms and, in understaffed hospitals, even offer treatment recommendations for less complex health issues. However, there are larger aspirations to expand the use of AI to more complex treatments and issues. In these cases, hospitals still need to reconcile the technical and data limitations (for example, disparate patient data sources not communicating with one another), along with accounting for ethical and regulatory issues (who owns the data and how much decision-making power does the algorithm have?).4
AI systems must fit into the ecosystems they support if they are to be successful. How does one design AI products to do this? For our purposes, we lean on computer scientist Kris Hammond’s definition, where AI constitutes any program that “does something we would normally think of as intelligent in humans. How the program does it is not the issue, just that it is able to do it at all. That is, it is AI if it is smart, but it doesn’t have to be smart like us.”5 This implies AI is smart in a “narrow” sense.6 While we can design AI programs that are “smarter” than us, they won’t necessarily work well with us. For instance, an algorithm can quickly be trained to identify animal breeds more efficiently than humans can. However, these algorithms can also be “tricked” into obviously incorrect answers (think kittens identified as household objects). This is because these algorithms are smart only in that narrow sense; unlike humans, they don’t possess characteristics such as common sense.
To counter this intrinsic issue, organizations can turn to human-centered design (HCD) to make intelligent systems effective. Informed by behavioral science and psychology, HCD makes AI programs more intuitive and usable for end-users. In essence, HCD is a means to direct AI in a manner that augments and enhances our intelligence.7 To illustrate, consider the Nest Thermostat. Through HCD, Nest, which deploys AI to analyze users’ energy usage, found that people adjust their energy consumption more efficiently if the thermostat proactively sets a default schedule that subtly decreases usage. The approach is effective because it not only analyzes energy usage but also adjusts it in a human-centric manner that makes behavior change actionable—in this case, by allowing the user to “opt out” of setting the thermostat schedule.8
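The "smart default with opt-out" pattern that Nest illustrates can be sketched in a few lines. To be clear, the schedule, temperatures, and function below are hypothetical illustrations of the design pattern, not Nest's actual implementation:

```python
# Hypothetical sketch of a "default schedule with opt-out" pattern.
# The schedule and temperatures (degrees F) are invented for illustration.

DEFAULT_SCHEDULE = {  # hour of day -> AI-suggested target temperature
    6: 68,   # warm up before the household wakes
    9: 62,   # save energy while the house is likely empty
    17: 68,  # comfortable for the evening
    22: 60,  # cooler overnight
}

def target_temperature(hour, user_overrides=None):
    """Return the target temperature for a given hour.

    The AI-set default applies unless the user has opted out of that
    time slot by supplying an override -- behavior change by default,
    with user control preserved.
    """
    overrides = user_overrides or {}
    if hour in overrides:  # user opted out of the default for this hour
        return overrides[hour]
    # Otherwise, fall back to the most recent scheduled set point.
    applicable = [h for h in DEFAULT_SCHEDULE if h <= hour]
    key = max(applicable) if applicable else max(DEFAULT_SCHEDULE)
    return DEFAULT_SCHEDULE[key]
```

The design choice worth noting is the asymmetry: the energy-saving behavior requires no user action, while overriding it requires a single explicit step.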
But as AI-driven innovation efforts increase in scope, it can be difficult to pinpoint the potential hurdles to human-machine interaction. Designing a smart product such as the Nest Thermostat is one thing, but what about the smart home—or even the smart neighborhood, where any number of products and users are expected to work in concert? Moving from smart products to smart systems, designing for an end-user interacting with a single product is different from designing for an entire system of products and stakeholders. The latter requires an in-depth methodology that brings to light how an entire community of individuals interacts with (or is affected by) the technologies. Broadening the scope this way requires the communities that are part of the solution to be part of the design process as well. By doing so, we can better uncover potential system issues and target where HCD principles can best be applied.9
In this article, we discuss how one method, participatory design, positions stakeholders to take a hands-on role in designing complex AI implementations. Participatory design embeds the needs of an impacted community directly into the design process to develop more sustainable solutions, thereby uncovering where HCD principles can best be deployed. We analyze how participatory design, coupled with AI, can be used in two cases—improving health communications and designing complex, “smart” cities. We close with a discussion on how this methodology has potential for the workplace, where AI is increasingly integrated into large-scale applications.
Participatory design is more a discovery method that informs your design strategy than a specific set of design principles. Similar to an agile methodology, participatory design leverages stakeholders to develop better systems—regardless of how complex the environment (see the sidebar, “Agile and participatory design: Bringing your stakeholders to the table”). As importantly, this co-design process is meant to continue from initial discovery to long after system implementation.
We see the benefits of this methodology manifest in three meaningful ways:
Applying these insights to a design process can position AI products to solve the right problems, for the right group of stakeholders, in an intuitive manner that makes long-term adoption especially appealing.
Participatory design shares many similarities with agile design, a process that works in short, iterative sprints to deploy solutions, gather customer feedback, and adjust accordingly for future sprints. Participatory design differentiates itself in two primary ways. First, participatory design traditionally considers a much wider group of stakeholders. Rather than rely on a single group to represent the “end-user,” participatory design requires representation of the entire “community” impacted by the project.15 For example, if a call center implements a chatbot to lessen the workload on employees, a participatory process would most likely include a call center employee, a member of the leadership team, and customers who may interact with the chatbot, while an agile process may require only a single person to represent the call center employees.
The second differentiating feature builds on the first. Wide representation of communities is a hallmark of participatory design because many of its original use cases revolved around large-scale urban development projects, such as public space or education design, where multiple, and often competing, stakeholder views exist.16 This is not to imply that agile is unsuited for complex implementations. Rather, it highlights that participatory design is rooted in complex design processes where straightforward answers do not always exist. Taken together, agile, participatory design, and HCD can augment and complement one another.
In the past, health communications relied on one-size-fits-all, one-way messaging to deliver health information—unsurprisingly, with generally disappointing results.17 Now, the combination of AI and mobile devices affords designers the opportunity to refresh the format and offer health messaging tailored to individuals. But research shows that better technology alone isn’t enough to engender positive change, because users frequently report feelings of information overload and uncertainty about how to alter behavior.18
Regardless of the technology, AI tools still need good design. This means accounting for users’ health literacy levels, ensuring cultural relevancy, and creating engaging workflows.19
Crohn’s disease, an incurable bowel disease with recurring and debilitating symptoms, can be very difficult to treat because symptoms and severity vary greatly among patients.20 Treatment also requires patients to meticulously track “observations of daily living” (ODLs), submit regular blood tests and weight readings, and maintain a very close line of communication with their treating physician.
These complex requirements led a diverse team of medical experts, information technologists, and designers to employ participatory design to build the AI-based ChronologyMD system.21 The goal of the system was to make treating and managing Crohn’s disease more manageable for both the patient and physician—at the individual level.
To build ChronologyMD, the team brought 30 Crohn’s patients, three gastroenterologists, and a nurse practitioner into the design process. For the first three months of the project, the designers held focus groups with patients and providers to identify pertinent ODLs to track. Starting from the academic literature on Crohn’s, the participants narrowed the ODLs cited there to a more relevant list. Abdominal pain, energy levels, medications, physical activities, and sleep were a few of the observations identified.
For the technology design, patient and provider interviews established which mobile device platforms to use, how to best capture data, and which formats were most intuitive for delivering insights (to ensure ease of use and action). Training occurred both in person and online. Further, online surveys, helplines, and in-person usability testing gave stakeholders a continuous stream of communication for suggesting feature changes and identifying problems. From this feedback, designers changed how patients and providers input and tracked data (for example, via apps and biometric devices such as fitness trackers and smart scales) and how the underlying AI built customized recommendations. Even after system deployment, patients retained the ability to create customized ODLs.
Eighteen months of participatory design netted promising results. Prior to system implementation, patients tracked their ODLs 40 percent of the time. After system launch, it increased to 92 percent (culminating in over 28,000 ODLs entered into the system throughout an eight-month evaluation period).
Better data and recommendations led to healthier behavior for many of the patients. Text message reminders aided patients in medication adherence and time-sensitive appointment scheduling. AI also highlighted to users how sleep and exercise correlated with better pain management and stress reduction.
Providers, seeing how patients interacted with the system, followed up with recommendations for new algorithms that could combine patient ODLs and electronic health record data to enhance patient care.
Smart city initiatives explore how big data, sensor technology, and urban infrastructure can work together to improve our roads, energy consumption, and public transportation.22 Concurrently, Internet of Things (IoT) sensor technology is increasingly being applied to agriculture.23 London-based researchers from Connected Seeds and Sensors combined these ideas to create more sustainable and inclusive urban food-growing communities.24
There are immense opportunities in, and challenges to, coordinating wide-scale urban gardens. These gardens create access to fresh, healthy food; reduce transit costs for food delivery; and enhance individual skill sets. They can also cultivate a greater sense of community among growers. But in many cases, as in this study’s London borough, it is difficult to coordinate agricultural resources and knowledge when large cultural and economic disparities exist. In an effort to bring these insights, and the impacted communities, together, the research team engaged in participatory design.
In the initial workshops, the team enlisted urban growers and seed savers (those who retain seeds and other reproductive material for future plant growth) to identify the most relevant data to capture with IoT sensors, which would later inform AI recommendations for growing practices. Because many participants were not designers by trade, the participatory design toolkit included brainstorming methods to help individuals actively engage in design. One such method is the cultural probe: participants used a camera and notebook to convey how certain images reflected a grower’s values and best practices. These exercises, and others like them, contributed to data categories reflecting both actionable growing tips and personal motivations for urban farming.
Through photos, notes, and audio recordings, a diverse group of 15 growers documented their insights regarding specific plants, best practices, and potential meals throughout the growing season. Supplemental community events provided potential growers with a forum for knowledge sharing. These sessions also acted as a vehicle to shape sensor technology deployment strategies. Examples include capturing data elements such as air and soil temperature and determining the necessary frequency for sensor readings.
These design sessions contributed to the Connected Seeds Library, which consists of both digital and physical platforms (the latter for those without access to digital technology). Urban farmers could easily access information about best practices, the benefits of urban gardening, and AI-driven tips for their personal gardens. For instance, the AI analyzed soil readings and relayed recommended water requirements; farmers found they could scale back water usage at times when they otherwise would not have known to. Seed packets also included QR codes linking to pages on how to care for the respective plants.
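The article does not describe Connected Seeds' actual algorithm, but a watering recommendation driven by soil readings could be as simple as a threshold rule tuned per crop. The crops, moisture ranges, and function below are assumptions for illustration only:

```python
# Illustrative threshold-based watering advice from soil-moisture readings.
# Crop names and moisture ranges are invented for this sketch, not taken
# from the Connected Seeds project.

CROP_MOISTURE_RANGE = {  # crop -> (min %, max %) acceptable soil moisture
    "tomato": (40, 60),
    "lettuce": (50, 70),
}

def watering_advice(crop, readings):
    """Average recent sensor readings and compare to the crop's range."""
    avg = sum(readings) / len(readings)
    low, high = CROP_MOISTURE_RANGE[crop]
    if avg < low:
        return "water"
    if avg > high:
        return "skip watering"  # the "scale back" case growers noticed
    return "no action needed"
```

Even a rule this simple surfaces the insight the growers described: without a sensor reading, the "skip watering" cases are invisible, so growers tend to overwater by default.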
One can envision how similar practices could be deployed for coordinating other smart city ventures. Another smart city focus, water usage, has the potential to expand on this use case by identifying areas, such as sanitation, where excess water usage may occur.
No doubt, one of the areas most influenced by AI-driven innovation is the workplace.25 Increasingly, tasks that we previously relied on people to complete are handled by AI products, and we also look to AI to augment our decision-making.26 These human-AI collaborations afford us an opportunity to design our systems with the people most affected: our workforce. In this spirit, we see a few areas where workers can be incorporated into the design process.
The State of AI in the Enterprise study reveals that 72 percent of respondents see AI-driven automation as substantially altering tasks and roles over the next year.27 Indeed, administrative tasks, such as expense processing, are increasingly automated by AI tools. As people are asked to step away from these activities to fulfill other roles, it can be incredibly insightful to understand how these individuals view the workflow alterations. Reactions might range from perceived threats to jobs to opportunities to replace work viewed as unfulfilling. Just as some visually impaired individuals have found their AI assistance tools socially isolating, employees may see their human office connections dwindle in the face of greater task automation—that is, while technology makes certain activities easier, it may do so at the expense of social interaction. Addressing these issues in the design process can enhance the AI and workplace experience.28
For example, on one project, robotic process automation software was deployed for a group of actuaries who previously spent much of their time on routine work that was not only time intensive (upward of 200 hours) but also contained errors (up to 20 percent error rates).29 Once the process was redesigned to have the actuaries work alongside the technology, the time dropped to 15 hours with a zero percent error rate. This empowered workers to move from routine operational activities to more strategic activities within the finance function of the organization, while simultaneously improving employee engagement.
Many of us fall victim to excessive email and social media checking—both of which can lead to increased anxiety and loss of productivity.30 Participatory design can direct AI applications to make us more efficient (and happier) users of workplace technology. Recently, Google teamed up with behavioral economist Dan Ariely to use AI to better manage office calendars.31 An algorithm actively assesses an employee’s availability and blocks off time for them to work on relevant tasks. Employees could expand on this idea and help brainstorm and manage other such tools. For example, algorithms could withhold email alerts during specific meetings or employee social gatherings until a more appropriate time.
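The "withhold alerts until a more appropriate time" idea above amounts to checking a notification's arrival time against the calendar's busy blocks. A minimal sketch, assuming times are expressed as minutes since midnight and making no claims about Google's actual scheduling algorithm:

```python
# Sketch of deferring notifications that arrive during meetings or
# focus blocks. Times are minutes since midnight; the example schedule
# is hypothetical.

def deliver_at(notification_time, busy_blocks):
    """Return the time at which a notification should be delivered.

    busy_blocks is a list of (start, end) intervals during which
    alerts are withheld; a notification arriving inside a block is
    held until the block finishes.
    """
    for start, end in busy_blocks:
        if start <= notification_time < end:
            return end  # hold until the meeting or focus block ends
    return notification_time  # otherwise deliver immediately

# Hypothetical calendar: meetings 9:00-10:00 and 13:00-14:00.
meetings = [(540, 600), (780, 840)]
```

In a real deployment the busy blocks would come from the calendar system itself (for example, a free/busy query), and the AI's contribution would be deciding which blocks to create in the first place.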
Organizations look to AI to enhance decision-making, but even well-designed AI can run into complications in practical implementation. This can manifest as algorithm aversion: people sometimes prefer to lean on human judgment over algorithms, especially after seeing an algorithm “fail”—even if it fails at a significantly lower rate than human prediction.32 However, one experiment found that people were more willing to accept algorithm recommendations if they could tweak the forecasts, even minimally.33 Bringing people into the process as contributors rather than passengers engenders trust, minimizes algorithm aversion, and, consequently, leads to adoption of AI insights. Even more promising, the State of AI in the Enterprise study shows that a majority of respondents (72 percent) believe that when AI truly improves decision-making, it also leads to greater job satisfaction.34
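The "adjustable forecast" finding suggests a simple interaction pattern: let users nudge the model's output within a bounded range, so the final number reflects their input without discarding the algorithm. The 10 percent bound below is our illustrative assumption, not a parameter from the cited experiment:

```python
# Sketch of a bounded user adjustment on an algorithmic forecast.
# The 10% default bound is an assumption for illustration.

def adjusted_forecast(model_forecast, user_adjustment, max_fraction=0.10):
    """Blend a user's tweak into an algorithmic forecast.

    The adjustment is clamped to +/- max_fraction of the model's value,
    so the user contributes to the result without overriding the model.
    """
    bound = abs(model_forecast) * max_fraction
    clamped = max(-bound, min(bound, user_adjustment))
    return model_forecast + clamped
```

For example, against a model forecast of 100, a user tweak of 5 passes through unchanged, while a tweak of 50 is clamped to 10. The point is psychological as much as statistical: a small, bounded say in the output keeps people engaged without letting them undo the algorithm's accuracy advantage.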
In all three instances, the upfront cost of incorporating people into AI design can be offset by better decision-making, increased efficiency, and a more fulfilling work environment.
Embedding AI into complex ecosystems is no easy feat. Individual AI-based products can succeed while broader ecosystems falter. But if we want our AI systems to measure up to our aspirations, we need to consider bottom-up, rather than top-down, approaches. Our stakeholders can be the strongest influencers in the development and trajectory of these products. Structuring their participation in a development process can pave the way to impactful solutions that push societies forward. In an era of big data and natural language processing, the best solutions might start with a one-on-one conversation.